Tip of master, commit 267fa2c1343f5f72a36d5a673c7c880143fee0fe

All tests were run with ofi+psm2 over ib0.

daos_test: run with 8 servers (boro-[4-11]) and 2 clients (boro-[12-13]). Servers were killed and /mnt/daos was cleaned between the runs listed below.

Tests requiring a pool to be created via dmg used a 4 GB pool, with boro-12 as the client.
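
For reference, pool creation went through dmg using the same URI-file mechanism as the other commands on this page; the line below is only a hedged sketch (the dmg subcommand and flags vary between DAOS versions, and the URI-file path is reused from the daos_perf command further down):

[sdwillso@boro-12 ~]$ orterun --ompi-server file:~/scripts/uri.txt dmg create --size=4G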

mpich tests used boro-4 as the server and boro-12 as the client, with a 1 GB pool.

Test Results

daos_test

Separate runs with cleanup in between (a representative launch command is sketched after the list):

  • -mpcCAeoRd - PASS
  • -i - FAIL, still rebuilding on IO test 27 after 10 minutes
    • Tracked in Jira (ticket link unavailable)
  • -r - same as -i
  • -O - PASS
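
The daos_test runs were launched through orterun in the same way as the daos_perf command further down; the line below is a sketch for illustration only, reusing that command's hostfile and URI file (the actual runs used two client nodes, so the real process count and hostfile likely differed; the test flags are the ones listed above):

[sdwillso@boro-12 ~]$ orterun --mca mtl ^psm2,ofi -np 1 --hostfile ~/scripts/host.cli.1 --ompi-server file:~/scripts/uri.txt daos_test -mpcCAeoRd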

daos_perf

1K Records

CREDITS=1

[sdwillso@boro-4 ~]$ orterun --mca mtl ^psm2,ofi -np 1 -quiet --hostfile ~/scripts/host.cli.1 --ompi-server file:~/scripts/uri.txt -x DD_SUBSYS= -x DD_MASK= -x D_LOG_FILE=/tmp/daos_perf.log daos_perf -T daos -P 2G -d 1 -a 200 -r 1000 -s 1K -C 1 -t -z
Test :
	DAOS (full stack)
Parameters :
	pool size     : 2048 MB
	credits       : 1 (sync I/O for -ve)
	obj_per_cont  : 1 x 1 (procs)
	dkey_per_obj  : 1
	akey_per_dkey : 200
	recx_per_akey : 1000
	value type    : single
	value size    : 1024
	zero copy     : yes
	overwrite     : yes
	verify fetch  : no
	VOS file      : <NULL>
884d80cb: rank 1 became pool service leader 0
Started...
update successfully completed:
	duration : 95.735795  sec
	bandwith : 2.040      MB/sec
	rate     : 2089.08    IO/sec
	latency  : 478.679    us (nonsense if credits > 1)
Duration across processes:
	MAX duration : 95.735795  sec
	MIN duration : 95.735795  sec
	Average duration : 95.735795  sec
884d80cb: rank 1 no longer pool service leader 0

CREDITS=8

  • Hit a known bug, tracked in Jira (ticket link unavailable)
  • The bug is fixed in a patch that has not yet been merged to master
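
For reference, the CREDITS=8 case is the same daos_perf invocation as the 1K/CREDITS=1 run above, with only the credit count raised via -C (launched through the same orterun wrapper):

daos_perf -T daos -P 2G -d 1 -a 200 -r 1000 -s 1K -C 8 -t -z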

4K Records

CREDITS=1

  • Hit a known bug, tracked in Jira (ticket link unavailable)
  • The bug is fixed in a patch that has not yet been merged to master
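
For reference, the 4K-record case differs from the 1K run only in the value size passed via -s (again launched through the same orterun wrapper):

daos_perf -T daos -P 2G -d 1 -a 200 -r 1000 -s 4K -C 1 -t -z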

IOR, 50GB pool, data verification enabled

[sdwillso@boro-4 ~]$ orterun -x FI_PSM2_DISCONNECT=1 -N 1 --hostfile ~/hostlists/daos_client_hostlist --mca mtl ^psm2,ofi  --ompi-server file:~/scripts/uri.txt ior -v -W -i 1 -a DAOS -w -o `uuidgen` -b 5g -t 1m -- -p 0a410b8a-327c-4c71-8ba7-4230f390cd7d -v 1 -r 1m -s 1m -c 1024 -a 16 -o LARGE -e 1
ior WARNING: assuming POSIX-based backend for DAOS statfs call.
ior WARNING: assuming POSIX-based backend for DAOS mkdir call.
ior WARNING: assuming POSIX-based backend for DAOS rmdir call.
ior WARNING: assuming POSIX-based backend for DAOS access call.
ior WARNING: assuming POSIX-based backend for DAOS stat call.
ior WARNING: assuming POSIX-based backend for DAOS statfs call.
ior WARNING: assuming POSIX-based backend for DAOS mkdir call.
ior WARNING: assuming POSIX-based backend for DAOS rmdir call.
ior WARNING: assuming POSIX-based backend for DAOS access call.
ior WARNING: assuming POSIX-based backend for DAOS stat call.
IOR-3.1.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Sep 27 22:08:13 2018
Command line        : ior -v -W -i 1 -a DAOS -w -o 4d47861b-127a-4693-8640-daa713492938 -b 5g -t 1m -- -p 0a410b8a-327c-4c71-8ba7-4230f390cd7d -v 1 -r 1m -s 1m -c 1024 -a 16 -o LARGE -e 1
Machine             : Linux boro-12.boro.hpdd.intel.com
Start time skew across all tasks: 14690266.15 sec
TestID              : 0
StartTime           : Thu Sep 27 22:08:13 2018
Path                : /home/sdwillso
FS                  : 3.8 TiB   Used FS: 13.5%   Inodes: 250.0 Mi   Used Inodes: 3.0%
Participating tasks: 2
[0] WARNING: USING daosStripeMax CAUSES READS TO RETURN INVALID DATA

Options: 
api                 : DAOS
apiVersion          : DAOS
test filename       : 4d47861b-127a-4693-8640-daa713492938
access              : single-shared-file
type                : independent
segments            : 1
ordering in a file  : sequential
ordering inter file : no tasks offsets
tasks               : 2
clients per node    : 1
repetitions         : 1
xfersize            : 1 MiB
blocksize           : 5 GiB
aggregate filesize  : 10 GiB

Results: 

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
Commencing write performance test: Thu Sep 27 22:08:14 2018
write     4484       5242880    1024.00    0.061408   2.20       0.027056   2.28       0   
Verifying contents of the file(s) just written.
Thu Sep 27 22:08:16 2018

remove    -          -          -          -          -          -          0.000067   0   
Max Write: 4483.50 MiB/sec (4701.29 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
write        4483.50    4483.50    4483.50       0.00    4483.50    4483.50    4483.50       0.00    2.28393     0      2   1    1   0     0        1         0    0      1 5368709120  1048576   10240.0 DAOS      0
Finished            : Thu Sep 27 22:08:25 2018

daos_bench

kv-idx-update

kv-dkey-update

kv-akey-update

kv-dkey-fetch

kv-akey-fetch

CaRT Self-Test

Small IO

Large IO Bulk PUT

Large IO Bulk GET
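
For reference, CaRT self-test runs of this kind are normally driven with the self_test utility; the line below is a hedged sketch, not the command actually used here (group name, endpoint range, message-size pairs, in-flight RPC count, and repetition count are all assumed placeholders; the i-prefixed/empty sizes cover the small-IO case and the b-prefixed 1 MB sizes cover the bulk PUT/GET cases):

self_test --group-name daos_server --endpoint 0-7:0 --message-sizes "i2048,b1048576 0,0 b1048576" --max-inflight-rpcs 16 --repetitions 100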

mpich tests
