...

Code Block
languagebash
$ sudo yum install -y fio
# or
$ sudo yum install -y daos-tests

Run fio

Code Block
languagebash
$ dmg pool create --size=10G
$ daos cont create --pool=$DAOS_POOL --type=POSIX
$ daos cont query --pool=$DAOS_POOL --cont=$DAOS_CONT
Pool UUID: f688f2ad-76ae-4368-8d1b-5697ca016a43
Container UUID: bcc5c793-60dc-4ec1-8bab-9d63ea18e794
Number of snapshots: 0
Latest Persistent Snapshot: 0
Highest Aggregated Epoch: 0
Container redundancy factor: 0
$ /usr/bin/mkdir /tmp/daos_test1
$ /usr/bin/touch /tmp/daos_test1/testfile
$ /usr/bin/df -h -t fuse.daos
df: no file systems processed
$ /usr/bin/dfuse --mountpoint=/tmp/daos_test1 --pool=$DAOS_POOL --cont=$DAOS_CONT
$ /usr/bin/df -h -t fuse.daos
Filesystem Size Used Avail Use% Mounted on
dfuse 954M 144K 954M 1% /tmp/daos_test1
$ /usr/bin/fio --name=random-write --ioengine=pvsync --rw=randwrite --bs=4k --size=128M --nrfiles=4 --directory=/tmp/daos_test1 --numjobs=8 --iodepth=16 --runtime=60 --time_based --direct=1 --buffered=0 --randrepeat=0 --norandommap --refill_buffers --group_reporting
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=pvsync, iodepth=16
...
fio-3.7
Starting 8 processes
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
write: IOPS=19.9k, BW=77.9MiB/s (81.7MB/s)(731MiB/9379msec)
clat (usec): min=224, max=6539, avg=399.16, stdev=70.52
lat (usec): min=224, max=6539, avg=399.19, stdev=70.52
clat percentiles (usec):
...
bw ( KiB/s): min= 9368, max=10096, per=12.50%, avg=9972.06, stdev=128.28, samples=144
iops : min= 2342, max= 2524, avg=2493.01, stdev=32.07, samples=144
lat (usec) : 250=0.01%, 500=96.81%, 750=3.17%, 1000=0.01%
lat (msec) : 10=0.01%
cpu : usr=0.43%, sys=1.05%, ctx=187242, majf=0, minf=488
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,187022,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=731MiB (766MB), run=9379-9379msec
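The sequence above (create a pool, create a POSIX container, mount it with dfuse, run fio, unmount) can be wrapped in a small script. This is only a sketch: it assumes `dmg`, `daos`, `dfuse`, and `fio` are on PATH, and the `awk` field positions for extracting the pool/container UUIDs are guesses based on the transcript output on this page and may differ between DAOS versions.

```shell
#!/bin/bash
# Sketch: automate the pool -> container -> dfuse -> fio sequence above.
# Assumes dmg/daos/dfuse/fio are on PATH; the awk field positions match
# the transcripts on this page and may differ between DAOS versions.
set -euo pipefail

MOUNT=/tmp/daos_test1
POOL_SIZE=10G

# Pure helper: compose the exact fio command line used above.
build_fio_cmd() {
  local dir=$1
  printf '%s' "fio --name=random-write --ioengine=pvsync --rw=randwrite \
--bs=4k --size=128M --nrfiles=4 --directory=$dir --numjobs=8 --iodepth=16 \
--runtime=60 --time_based --direct=1 --buffered=0 --randrepeat=0 \
--norandommap --refill_buffers --group_reporting"
}

run_fio_on_dfuse() {
  # UUID extraction here is an assumption; verify against your dmg/daos output.
  DAOS_POOL=$(dmg pool create --size="$POOL_SIZE" | awk '/UUID/ {print $3}')
  DAOS_CONT=$(daos cont create --pool="$DAOS_POOL" --type=POSIX \
              | awk '/Successfully created container/ {print $4}')
  mkdir -p "$MOUNT"
  dfuse --mountpoint="$MOUNT" --pool="$DAOS_POOL" --cont="$DAOS_CONT"
  eval "$(build_fio_cmd "$MOUNT")"
  fusermount -u "$MOUNT"
}

# run_fio_on_dfuse   # uncomment on a node with a running DAOS system
```

Keeping `build_fio_cmd` as a pure helper makes it easy to reuse the same fio parameters against a different mount point.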
Code Block
languagebash
# Data after fio completed
$ ll /tmp/daos_test1
total 1048396
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.0.0
-rw-r--r-- 1 user1 user1 33546240 Apr 21 23:28 random-write.0.1
-rw-r--r-- 1 user1 user1 33542144 Apr 21 23:28 random-write.0.2
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.0.3
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.1.0
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.1.1
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.1.2
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.1.3
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.2.0
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.2.1
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.2.2
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.2.3
-rw-r--r-- 1 user1 user1 33542144 Apr 21 23:28 random-write.3.0
-rw-r--r-- 1 user1 user1 33550336 Apr 21 23:28 random-write.3.1
-rw-r--r-- 1 user1 user1 33550336 Apr 21 23:28 random-write.3.2
-rw-r--r-- 1 user1 user1 33542144 Apr 21 23:28 random-write.3.3
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.4.0
-rw-r--r-- 1 user1 user1 33525760 Apr 21 23:28 random-write.4.1
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.4.2
-rw-r--r-- 1 user1 user1 33550336 Apr 21 23:28 random-write.4.3
-rw-r--r-- 1 user1 user1 33542144 Apr 21 23:28 random-write.5.0
-rw-r--r-- 1 user1 user1 33546240 Apr 21 23:28 random-write.5.1
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.5.2
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.5.3
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.6.0
-rw-r--r-- 1 user1 user1 33550336 Apr 21 23:28 random-write.6.1
-rw-r--r-- 1 user1 user1 33550336 Apr 21 23:28 random-write.6.2
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.6.3
-rw-r--r-- 1 user1 user1 33525760 Apr 21 23:28 random-write.7.0
-rw-r--r-- 1 user1 user1 33554432 Apr 21 23:28 random-write.7.1
-rw-r--r-- 1 user1 user1 33525760 Apr 21 23:28 random-write.7.2
-rw-r--r-- 1 user1 user1 33542144 Apr 21 23:28 random-write.7.3

Unmount

Code Block
languagebash
$ /usr/bin/fusermount -u /tmp/daos_test1/

$ /usr/bin/df -h -t fuse.daos
df: no file systems processed

Test with mpirun

Required RPMs

Code Block
languagebash
$ sudo yum install -y mpich
$ sudo yum install -y mdtest
$ sudo yum install -y Lmod
$ module load mpi/mpich-x86_64
$ /usr/bin/touch /tmp/daos_test1/testfile

Run mpirun ior

Code Block
languagebash
$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
IOR-3.4.0+dev: MPI Coordinated Test of Parallel I/O
Began : Fri Apr 16 18:07:56 2021
Command line : ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
Machine : Linux boro-8.boro.hpdd.intel.com
Start time skew across all tasks: 0.00 sec
TestID : 0
StartTime : Fri Apr 16 18:07:56 2021
Path : /tmp/daos_test1/testfile
FS : 3.8 GiB Used FS: 1.1% Inodes: 0.2 Mi Used Inodes: 0.1%
Participating tasks : 30
Options:
api : POSIX
apiVersion :
test filename : /tmp/daos_test1/testfile
access : single-shared-file
type : independent
segments : 1
ordering in a file : sequential
ordering inter file : no tasks offsets
nodes : 1
tasks : 30
clients per node : 30
repetitions : 1
xfersize : 25 MiB
blocksize : 25 MiB
aggregate filesize : 750 MiB
verbose : 1
Results:
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
Commencing write performance test: Fri Apr 16 18:07:56 2021
write 1499.68 59.99 0.480781 25600 25600 0.300237 0.500064 0.483573 0.500107 0
Max Write: 1499.68 MiB/sec (1572.53 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 1499.68 1499.68 1499.68 0.00 59.99 59.99 59.99 0.00 0.50011 NA NA 0 30 30 1 0 0 1 0 0 1 26214400 26214400 750.0 POSIX 0
Finished : Fri Apr 16 18:07:57 2021
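To compare bandwidth at different task counts, the ior run above can be repeated in a loop and the "Max Write" summary line parsed. A sketch only: `<host1>` stays a placeholder as in the examples above, and the `awk` parsing assumes ior's `Max Write: N MiB/sec (...)` summary format shown in this transcript.

```shell
#!/bin/bash
# Sketch: sweep ior over several task counts and collect "Max Write".
# Placeholder hostnames and paths follow the examples above.
set -euo pipefail

# Pure helper: pull the MiB/sec figure out of ior's summary line,
# e.g. "Max Write: 1499.68 MiB/sec (1572.53 MB/sec)" -> "1499.68".
parse_max_write() {
  awk '/^Max Write:/ {print $3}'
}

sweep_ior() {
  local host=$1 np bw
  for np in 1 8 16 30; do
    bw=$(/usr/lib64/mpich/bin/mpirun -host "$host" -np "$np" \
           ior -a POSIX -b 26214400 -v -w -k -i 1 \
               -o /tmp/daos_test1/testfile -t 25M | parse_max_write)
    echo "np=$np MaxWrite=${bw} MiB/sec"
  done
}

# sweep_ior <host1>   # uncomment with a real client hostname
```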


Run mpirun mdtest

Code Block
languagebash
$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 -d / -w 4096 --dfs.chunk_size 1048576 --dfs.cont <container.uuid> --dfs.destroy --dfs.dir_oclass RP_3G1 --dfs.group daos_server --dfs.oclass RP_3G1 --dfs.pool <pool_uuid>
-- started at 04/16/2021 22:01:55 --
mdtest-3.4.0+dev was launched with 30 total task(s) on 1 node(s)
Command line used: mdtest '-a' 'DFS' '-z' '0' '-F' '-C' '-i' '1' '-n' '1667' '-e' '4096' '-d' '/' '-w' '4096' '--dfs.chunk_size' '1048576' '--dfs.cont' '3e661024-2f1f-4d7a-9cd4-1b05601e0789' '--dfs.destroy' '--dfs.dir_oclass' 'SX' '--dfs.group' 'daos_server' '--dfs.oclass' 'SX' '--dfs.pool' 'd546a7f5-586c-4d8f-aecd-372878df7b97'
WARNING: unable to use realpath() on file system.
Path:
FS: 0.0 GiB Used FS: -nan% Inodes: 0.0 Mi Used Inodes: -nan%
Nodemap: 111111111111111111111111111111
30 tasks, 50010 files
SUMMARY rate: (of 1 iterations)
Operation Max Min Mean Std Dev
--------- --- --- ---- -------
File creation : 14206.584 14206.334 14206.511 0.072
File stat : 0.000 0.000 0.000 0.000
File read : 0.000 0.000 0.000 0.000
File removal : 0.000 0.000 0.000 0.000
Tree creation : 1869.791 1869.791 1869.791 0.000
Tree removal : 0.000 0.000 0.000 0.000
-- finished at 04/16/2021 22:01:58 --

$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 50 mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 -d / -w 4096 --dfs.chunk_size 1048576 --dfs.cont 3e661024-2f1f-4d7a-9cd4-1b05601e0789 --dfs.destroy --dfs.dir_oclass SX --dfs.group daos_server --dfs.oclass SX --dfs.pool d546a7f5-586c-4d8f-aecd-372878df7b97
-- started at 04/16/2021 22:02:21 --
mdtest-3.4.0+dev was launched with 50 total task(s) on 1 node(s)
Command line used: mdtest '-a' 'DFS' '-z' '0' '-F' '-C' '-i' '1' '-n' '1667' '-e' '4096' '-d' '/' '-w' '4096' '--dfs.chunk_size' '1048576' '--dfs.cont' '3e661024-2f1f-4d7a-9cd4-1b05601e0789' '--dfs.destroy' '--dfs.dir_oclass' 'SX' '--dfs.group' 'daos_server' '--dfs.oclass' 'SX' '--dfs.pool' 'd546a7f5-586c-4d8f-aecd-372878df7b97'
WARNING: unable to use realpath() on file system.
Path:
FS: 0.0 GiB Used FS: -nan% Inodes: 0.0 Mi Used Inodes: -nan%
Nodemap: 11111111111111111111111111111111111111111111111111
50 tasks, 83350 files
SUMMARY rate: (of 1 iterations)
Operation Max Min Mean Std Dev
--------- --- --- ---- -------
File creation : 13342.303 13342.093 13342.228 0.059
File stat : 0.000 0.000 0.000 0.000
File read : 0.000 0.000 0.000 0.000
File removal : 0.000 0.000 0.000 0.000
Tree creation : 1782.938 1782.938 1782.938 0.000
Tree removal : 0.000 0.000 0.000 0.000
-- finished at 04/16/2021 22:02:27 --
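When comparing mdtest runs at different `-np` values, the number of interest is usually the Max "File creation" rate. A small parser can extract it from output like the transcripts above. A sketch; the field positions are taken from the summary format shown here and may need adjusting for other mdtest versions.

```shell
#!/bin/bash
# Sketch: extract the Max "File creation" rate (ops/sec) from mdtest
# output like the transcripts above, to compare runs at different -np.
set -euo pipefail

# mdtest summary line: "File creation : 14206.584 14206.334 14206.511 0.072"
# Fields: 1=File 2=creation 3=: 4=Max 5=Min 6=Mean 7=StdDev
parse_create_max() {
  awk '/File creation/ {print $4}'
}

# Example on a live system (placeholders as in the commands above):
# /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 mdtest -a DFS ... \
#   | parse_create_max
```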

Run with a 2-rank DAOS server:

No Format
(Agent)
$ sudo systemctl disable daos_agent.service
$ sudo systemctl enable daos_agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/daos_agent.service to /usr/lib/systemd/system/daos_agent.service.
$ sudo systemctl start daos_agent.service
$ systemctl is-active daos_agent.service
active

(Servers: all server ranks)
$ sudo -n systemctl stop daos_server.service
$ sudo -n systemctl disable daos_server.service
$ sudo -n systemctl enable daos_server.service
$ systemctl is-active daos_server.service
inactive
$ sudo -n systemctl start daos_server.service
$ systemctl is-active daos_server.service
active

$ /usr/bin/dmg -o /etc/daos/daos_control.yml -d storage format
DEBUG 22:53:45.739455 main.go:217: debug output enabled
DEBUG 22:53:45.739622 main.go:244: control config loaded from /etc/daos/daos_control.yml
DEBUG 22:53:45.740018 system.go:406: DAOS system query request: &{unaryRequest:{request:{deadline:{wall:0 ext:0 loc:<nil>} Sy
s: HostList:[]} rpc:0x9bea00} msRequest:{} sysRequest:{Ranks:{RWMutex:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount
:0 readerWait:0} HostSet:{Mutex:{state:0 sema:0} list:0xc0002cd280}} Hosts:{Mutex:{state:0 sema:0} list:0xc0002cd240}} retrya
bleRequest:{retryTimeout:0 retryInterval:0 retryMaxTries:0 retryTestFn:0x9beb20 retryFn:0x9bec20} FailOnUnavailable:true}
DEBUG 22:53:45.740144 rpc.go:196: request hosts: [boro-52:10001]
DEBUG 22:53:45.773468 rpc.go:196: request hosts: [boro-8:10001 boro-52:10001]
Format Summary:
  Hosts       SCM Devices NVMe Devices 
  -----       ----------- ------------ 
  boro-[9,52] 1           0            

$ /usr/bin/dmg pool list
no pools in system

$ dmg pool create --size=5G
Creating DAOS pool with automatic storage allocation: 5.0 GB NVMe + 6.00% SCM
Pool created with 100.00% SCM/NVMe ratio
-----------------------------------------
  UUID          : 60ef4faa-72cb-43c1-9162-9236e6bb28f2
  Service Ranks : 0                                   
  Storage Ranks : 0                                   
  Total Size    : 5.0 GB                              
  SCM           : 5.0 GB (5.0 GB / rank)              
  NVMe          : 0 B (0 B / rank)            

$ dmg pool list
Pool UUID                            Svc Replicas 
---------                            ------------ 
60ef4faa-72cb-43c1-9162-9236e6bb28f2 0           

$ daos cont create --pool=$DAOS_POOL --type=POSIX --oclass=SX
Successfully created container e3d57d0e-38a2-48e5-8601-9810214eb946

$ daos pool list-containers --pool=<pool-id>
 e3d57d0e-38a2-48e5-8601-9810214eb946

$ df -h -t fuse.daos
df: no file systems processed

$ dfuse --mountpoint=/tmp/daos_test1 --pool=$DAOS_POOL --cont=$DAOS_CONT

$ df -h -t fuse.daos
Filesystem      Size  Used Avail Use% Mounted on
dfuse           9.4G  549K  9.4G   1% /tmp/daos_test1

$ fio --name=random-write --ioengine=pvsync --rw=randwrite --bs=4k --size=128M --nrfiles=4 --directory=/tmp/daos_test1 --numjobs=8 --iodepth=16 --runtime=60 --time_based --direct=1 --buffered=0 --randrepeat=0 --norandommap --refill_buffers --group_reporting
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=pvsync, iodepth=16
...
fio-3.7
Starting 8 processes
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
random-write: Laying out IO files (4 files / total 128MiB)
Jobs: 8 (f=32): [w(8)][98.4%][r=0KiB/s,w=83.0MiB/s][r=0,w=21.5k IOPS][eta 00m:01s]
random-write: (groupid=0, jobs=8): err= 0: pid=17424: Fri Apr 16 23:00:37 2021
  write: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(5062MiB/60001msec)
    clat (usec): min=224, max=5982, avg=368.72, stdev=59.60
     lat (usec): min=224, max=5982, avg=368.80, stdev=59.60
    clat percentiles (usec):  
...
bw ( KiB/s): min= 9808, max=10984, per=12.50%, avg=10798.05, stdev=97.24, samples=952
iops : min= 2452, max= 2746, avg=2699.50, stdev=24.31, samples=952
lat (usec) : 250=0.05%, 500=98.19%, 750=1.75%, 1000=0.01%
lat (msec) : 4=0.01%, 10=0.01%
cpu : usr=0.69%, sys=1.51%, ctx=1296165, majf=0, minf=308
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1295786,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=5062MiB (5308MB), run=60001-60001msec
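The systemd sequence earlier (`systemctl start` followed immediately by `systemctl is-active`) can race a slow service start. A small polling helper makes the check robust; this is a generic sketch, with the retry count and service name purely illustrative.

```shell
#!/bin/bash
# Sketch: poll a status command until it prints "active", with a bounded
# number of 1-second retries. Generic, so it can wrap systemctl is-active.
wait_until_active() {
  local tries=$1; shift
  local i
  for ((i = 0; i < tries; i++)); do
    [[ "$("$@" 2>/dev/null || true)" == "active" ]] && return 0
    sleep 1
  done
  return 1
}

# On a server node, for example:
# sudo systemctl start daos_server.service
# wait_until_active 30 systemctl is-active daos_server.service
```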

No Format
$ /usr/lib64/mpich/bin/mpirun -host <client-host> -np 1 ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
IOR-3.4.0+dev: MPI Coordinated Test of Parallel I/O
Began : Fri Apr 16 23:19:56 2021
Command line : ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
Machine : Linux boro-8.boro.hpdd.intel.com
Start time skew across all tasks: 0.00 sec
TestID : 0
StartTime : Fri Apr 16 23:19:56 2021
Path : /tmp/daos_test1/testfile
FS : 3.8 GiB Used FS: 1.1% Inodes: 0.2 Mi Used Inodes: 0.1%
Participating tasks : 1
Options:
api : POSIX
apiVersion :
test filename : /tmp/daos_test1/testfile
access : single-shared-file
type : independent
segments : 1
ordering in a file : sequential
ordering inter file : no tasks offsets
nodes : 1
tasks : 1
clients per node : 1
repetitions : 1
xfersize : 25 MiB
blocksize : 25 MiB
aggregate filesize : 25 MiB
verbose : 1
Results:
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
Commencing write performance test: Fri Apr 16 23:19:56 2021
write 1643.22 65.88 0.015179 25600 25600 0.000016 0.015179 0.000007 0.015214 0
Max Write: 1643.22 MiB/sec (1723.04 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 1643.22 1643.22 1643.22 0.00 65.73 65.73 65.73 0.00 0.01521 NA NA 0 1 1 1 0 0 1 0 0 1 26214400 26214400 25.0 POSIX 0
Finished : Fri Apr 16 23:19:56 2021
$ /usr/lib64/mpich/bin/mpirun -hostfile /tmp/hostfile -np 30 ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
IOR-3.4.0+dev: MPI Coordinated Test of Parallel I/O
Began : Fri Apr 16 23:21:53 2021
Command line : ior -a POSIX -b 26214400 -v -w -k -i 1 -o /tmp/daos_test1/testfile -t 25M
Machine : Linux boro-8.boro.hpdd.intel.com
Start time skew across all tasks: 0.00 sec
TestID : 0
StartTime : Fri Apr 16 23:21:53 2021
Path : /tmp/daos_test1/testfile
FS : 3.8 GiB Used FS: 1.1% Inodes: 0.2 Mi Used Inodes: 0.1%
Participating tasks : 30
Options:
api : POSIX
apiVersion :
test filename : /tmp/daos_test1/testfile
access : single-shared-file
type : independent
segments : 1
ordering in a file : sequential
ordering inter file : no tasks offsets
nodes : 1
tasks : 30
clients per node : 30
repetitions : 1
xfersize : 25 MiB
blocksize : 25 MiB
aggregate filesize : 750 MiB
verbose : 1
Results:
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
Commencing write performance test: Fri Apr 16 23:21:53 2021
write 1473.23 58.93 0.471202 25600 25600 0.306893 0.509040 0.488975 0.509086 0
Max Write: 1473.23 MiB/sec (1544.79 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Max(OPs) Min(OPs) Mean(OPs) StdDev Mean(s) Stonewall(s) Stonewall(MiB) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggs(MiB) API RefNum
write 1473.23 1473.23 1473.23 0.00 58.93 58.93 58.93 0.00 0.50909 NA NA 0 30 30 1 0 0 1 0 0 1 26214400 26214400 750.0 POSIX 0
Finished : Fri Apr 16 23:21:54 2021

$ dmg pool create --scm-size=5G
Creating DAOS pool with manual per-server storage allocation: 5.0 GB SCM, 0 B NVMe (100.00% ratio)
Pool created with 100.00% SCM/NVMe ratio
-----------------------------------------
UUID : ea834679-ec12-42b5-840b-4d6ec9c4911a
Service Ranks : 0
Total Size : 10 GB
SCM : 10 GB (5.0 GB / rank)
NVMe : 0 B (0 B / rank)

$ dmg pool list
Pool UUID Svc Replicas
--------- ------------
ea834679-ec12-42b5-840b-4d6ec9c4911a 0

$ daos cont create --pool=<pool-id> --type=POSIX --oclass=SX
Successfully created container e204a35f-6846-474f-8dea-6f672a23f19b

$ /usr/lib64/mpich/bin/mpirun -host <host1> -np 30 mdtest -a DFS -z 0 -F -C -i 1 -n 1667 -e 4096 -d / -w 4096 --dfs.chunk_size 1048576 --dfs.cont <cont-id> --dfs.destroy --dfs.dir_oclass SX --dfs.group daos_server --dfs.oclass SX --dfs.pool <pool-id>
-- started at 04/17/2021 00:21:20 --
mdtest-3.4.0+dev was launched with 30 total task(s) on 1 node(s)
Command line used: mdtest '-a' 'DFS' '-z' '0' '-F' '-C' '-i' '1' '-n' '1667' '-e' '4096' '-d' '/' '-w' '4096' '--dfs.chunk_size' '1048576' '--dfs.cont' 'e204a35f-6846-474f-8dea-6f672a23f19b' '--dfs.destroy' '--dfs.dir_oclass' 'SX' '--dfs.group' 'daos_server' '--dfs.oclass' 'SX' '--dfs.pool' 'ea834679-ec12-42b5-840b-4d6ec9c4911a'
WARNING: unable to use realpath() on file system.
Path:
FS: 0.0 GiB Used FS: -nan% Inodes: 0.0 Mi Used Inodes: -nan%
Nodemap: 111111111111111111111111111111
30 tasks, 50010 files
SUMMARY rate: (of 1 iterations)
Operation Max Min Mean Std Dev
--------- --- --- ---- -------
File creation : 21592.376 21592.297 21592.349 0.024
File stat : 0.000 0.000 0.000 0.000
File read : 0.000 0.000 0.000 0.000
File removal : 0.000 0.000 0.000 0.000
Tree creation : 1560.798 1560.798 1560.798 0.000
Tree removal : 0.000 0.000 0.000 0.000
-- finished at 04/17/2021 00:21:22 --
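The `-hostfile` form of mpirun used above expects a plain list of client hostnames, and MPICH's Hydra launcher also accepts a `host:n` suffix to cap processes per host. A sketch with placeholder hostnames (replace them with your client nodes):

```shell
#!/bin/bash
# Sketch: generate an MPICH hostfile for the -hostfile runs above.
# Hostnames are placeholders; "host:n" caps processes per host.
cat > /tmp/hostfile <<'EOF'
client-node-1:16
client-node-2:16
EOF

# /usr/lib64/mpich/bin/mpirun -hostfile /tmp/hostfile -np 30 ior ...
```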

Run with a 4-rank DAOS server: rank rebuild with dfuse fio and mpirun ior

...