
Fabric: PSM2 preferred.

Number of Servers: 2 to 6.

Drives per Server: A few systems have multiple NVMe drives, so run the first I/O tests with 2 NVMe drives per server where possible.

Number of Clients: 4 to 8, with 32 processes per client node.

daos_nvme.conf: Use the default options for now.

[Nvme]
TransportID "trtype:PCIe traddr:0000:81:00.0" Nvme0
TransportID "trtype:PCIe traddr:0000:82:00.0" Nvme1
TimeoutUsec 0
ActionOnTimeout None
AdminPollRate 100000
HotplugEnable No
HotplugPollRate 0

Minimum Pool Size: 1G

Tests:

Test | Condition | Data Input | Comments | Test Area
Unit Test

src/vos/vea/tests/vea_ut
src/eio/smd/tests/smd_ut

Look at DAOS-1246 and include all unit tests for NVMe.

daos_epoch_discard() will also create more fragmentation, so add a unit test specific to that API call.
[daos_epoch_discard() has been removed from the code, so that unit test code is no longer needed.]

Test Area: Unit tests
Existing Functional Tests

Identify and run existing functional test cases with NVMe.

Manual testing is in progress; once it is done, the test case information will be updated here. (The patch is ready at https://review.hpdd.intel.com/#/c/33669/ but cannot be merged until we get a solution from the DCO team on DCO-8268.)

Test Area: SAMIR
Hardware

Run soak testing and regression testing with a mix of Apache Pass + NVMe (close to the final A21 server memory sizes).

Make sure to account for any hardware changes in terms of memory, CPU, drives, and fabric.

Test Area: Soak Testing
I/O

Create the pool with a small NVMe size (0/1G/48G).

Write/read data (<4K): 1B/1K/4K [random/sequential] and make sure it does not use NVMe.

Write/read data with non-standard sizes: 4025B, 259K, 1.1M, 30.22M [random/sequential] and make sure it does not use NVMe.

For the second case, data sizes can be generated randomly instead of using predefined fixed sizes.

Test Area: Saurabh

Write/read data (>4K): 1M/16G [random/sequential] and make sure it uses NVMe.

Write/read data with non-standard sizes: 4025B, 259K, 1.1M, 30.22M [random/sequential] and make sure it does not use NVMe.

For the second case, data sizes can be generated randomly instead of using predefined fixed sizes (see the sketch below).

Test Area: Saurabh
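A minimal sketch of the random-size generation mentioned above, in Python. The size range and the 4K alignment check are assumptions for illustration; the generated values would be fed to IOR (or the I/O test) in place of the fixed 4025B/259K/1.1M/30.22M list.

# Sketch only: generate random, non-4K-aligned transfer sizes covering both the
# SCM (<4K) and NVMe (>=4K) ranges instead of a predefined fixed list.
import random

def random_unaligned_sizes(count=10, low=1, high=32 * 1024 * 1024, seed=None):
    rng = random.Random(seed)            # seed for reproducible test runs
    sizes = []
    while len(sizes) < count:
        size = rng.randint(low, high)
        if size % 4096 != 0:             # keep only non-block-aligned sizes
            sizes.append(size)
    return sizes

if __name__ == "__main__":
    # e.g. use these as transfer sizes for the random-size cases
    print(random_unaligned_sizes(count=5, seed=42))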
Create the pool with the maximum NVMe size (1TB/2TB).

Write/read data (<4K): 1B/1K/4K [random/sequential] and make sure it does not use NVMe.

Write/read data with non-standard sizes: 4025B, 259K, 1.1M, 30.22M [random/sequential] and make sure it does not use NVMe.

For the second case, data sizes can be generated randomly instead of using predefined fixed sizes.

Use the maximum pool size based on the NVMe size on each server and the total number of servers (see the sketch after this row).

Test Area: Saurabh

Write/read data (>4K): 1M/16G/1TB [random/sequential] and make sure it uses NVMe.

Write/read data with non-standard sizes: 4025B, 259K, 1.1M, 30.22M [random/sequential] and make sure it does not use NVMe.

For the second case, data sizes can be generated randomly instead of using predefined fixed sizes.

Test Area: Saurabh
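A small sketch of how the maximum pool size could be derived from the per-server NVMe capacity and the server count. The 5% headroom is an assumption, not a documented requirement.

# Sketch only: compute the pool size to request for the "maximum NVMe size" cases.
def max_pool_size_bytes(nvme_per_server_bytes, num_servers, headroom=0.95):
    # keep a little headroom so pool creation does not fail on metadata overhead
    return int(nvme_per_server_bytes * num_servers * headroom)

if __name__ == "__main__":
    two_tb = 2 * 1024 ** 4               # 2TB NVMe per server (example value)
    print(max_pool_size_bytes(two_tb, num_servers=4))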
Unaligned I/O

Try using the offset from the API, or use the core Python API to modify the existing array and read it back.

Write 1M, modify 1 byte at different offsets, and read it back (see the sketch below).

The test code daos_run_io_conf.c will be doing a similar thing, so it is worth reusing it.

Test Area: Unit tests (some more recovery tests; talk to Di)
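A sketch of a host-side reference model for the unaligned-overwrite check. The DAOS write/read calls are left as hypothetical placeholders (they are not real API names); the point is how the expected buffer is built after single-byte overwrites and compared against the read-back.

# Sketch only: build the expected buffer for "write 1M, modify 1 byte at
# different offsets, read back" and compare it with the data read from DAOS.
import os

def build_expected(base, patches):
    """Apply (offset, byte_value) patches to a copy of the base buffer."""
    buf = bytearray(base)
    for offset, value in patches:
        buf[offset] = value
    return bytes(buf)

if __name__ == "__main__":
    base = os.urandom(1 << 20)                                   # 1M initial write
    patches = [(0, 0xAA), (4096, 0xBB), ((1 << 20) - 1, 0xCC)]   # 1-byte overwrites
    expected = build_expected(base, patches)
    # placeholders: write base + patches through DAOS, then read the object back
    actual = expected                                            # stand-in for the DAOS read-back
    assert actual == expected, "read-back does not match the reference model"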
I/O with Server/System Restart

Create the pool with a small NVMe size (0/1G/48G). Write/read data (<4K): 1B/1K/4K.

Write the data / stop the servers / start the servers / read the data back and check data integrity.

Test Area: Saurabh

Write/read data (>4K): 1M/16G.

Write the data / reboot the node / start the servers / read the data back and check data integrity.

After server start, SPDK needs to be set up again.

Test Area: Saurabh
Create the pool with the maximum NVMe size (1TB/2TB). Write/read data (<4K): 1B/1K/4K.

Write the data / stop the servers / start the servers / read the data back and check data integrity.

Use the maximum pool size based on the NVMe size on each server and the total number of servers.

Test Area: Saurabh

Write/read data (>4K): 1M/16G.

Write the data / reboot the node / start the servers / read the data back and check data integrity.

After server start, SPDK needs to be set up again.

Test Area: Saurabh
Create the pool with the maximum NVMe size (1TB/2TB).

Write a single IOR data set, then read the single IOR data set back.

Kill the server while I/O is writing and start the server after 5-10 minutes; I/O should continue after the server starts. Do the same while a read is in progress.

Use the maximum pool size based on the NVMe size on each server and the total number of servers.

Test Area: Rebuild
Verify application timeouts when servers are powered down. Use any data sizes, but use more threads to load the data at the same time. Kill the server and wait until the application times out. Need to find out the timeout value for dropping the RPC connection.

Test Area: Rebuild
Create the pool with the maximum NVMe size (1TB/2TB). Write multiple IOR data sets, read multiple IOR data sets, and read/write together (real-world scenarios). Use different applications together, if available, instead of just IOR in different threads. Kill the server while multiple writes are in progress and start the server after 5-10 minutes; I/O should continue after the server starts. Do the same while multiple reads are in progress.

PMDK may have an API to get the allocated space, but it will not be easy to use as-is. A pool query will be provided in the future which can track the size to verify memory leaks.

We may need a fault-injection layer to control the PMDK transaction around server start/stop.

Example:

1> Create a 3G pool.

2> Write 1G of data and shut down the server before the transaction commits.

3> On the next server start, it should reclaim the full 3G of space.

Test Area: Soak
Re-written data fetch validation

Write data on NVMe (>4K).

Re-write using the same array with a small size (~1-2 bytes), which will go through SCM.

Do this and change ~100 bytes with different data.

Do a fetch, which will combine the records, and verify it.

Do a similar thing in the other direction: write a small data set to SCM, overwrite with large data to NVMe, and validate the content.

When overwriting, a new epoch entry is effectively created, but it will reuse the old data set and update only the new byte values.

During fetch, the epochs will be aggregated and the result returned with the modified bytes.

Test Area: Unit tests
Re-written data fetch validation

Write data on SCM (<4K).

Extend the data set using the same array with a larger size (>8K), which will go through NVMe.

Repeat this a few times with different data.

Fetch the data set, which will combine the records, and verify all the old + new records.

Test Area: Unit tests

Verify there is no leak after a PMDK transaction. (This cannot be tested until we get hooks to control the PMDK transaction.)

PMDK may have an API to get the allocated space, but it will not be easy to use as-is. A pool query will be provided in the future which can track the size to verify memory leaks.

Example (see the sketch below):

1> Create a 3G pool.

2> Write 1G of data and shut down the server before the transaction commits.

3> On the next server start, verify it reclaims the full 3G of space.

We may need a fault-injection layer to control the PMDK transaction around server start/stop, i.e. something that can stop the server before the PMDK transaction is committed.

Test Area: Unit tests (talk to Jeff/Di)
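A minimal skeleton of the space-accounting check from the example above. Both query_free_space() and write_interrupted_by_restart() are hypothetical hooks standing in for the future pool query and the fault-injection support; only the before/after assertion is the point.

# Sketch only: verify no space is leaked when the server is stopped before the
# PMDK transaction commits.
POOL_SIZE = 3 * 1024 ** 3           # 3G pool from the example
TOLERANCE = 64 * 1024 ** 2          # allowance for metadata overhead (assumption)

def query_free_space():
    # placeholder: would use the (future) pool query interface
    return POOL_SIZE

def write_interrupted_by_restart(nbytes):
    # placeholder: write nbytes, stop the server before the transaction commits,
    # then restart it (requires the fault-injection hook)
    pass

if __name__ == "__main__":
    before = query_free_space()
    write_interrupted_by_restart(1 * 1024 ** 3)      # the 1G write that never commits
    after = query_free_space()
    assert abs(before - after) <= TOLERANCE, "space was not reclaimed after restart"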
Large Number of Pools with Server/System Restart

Create a large number of pools (10000) with different NVMe sizes. Write mixed data across all the pools (1K/4K/1M/1T). Write the data / stop the servers / start the servers / read the data back and check data integrity.

Test Area: Pool tests

Write the data / reboot the node / start the servers / read the data back and check data integrity.

Test Area: Pool tests

Pool Capacity

Create an NVMe pool of size 1GB. Write I/O > 1GB, which should fail with a no-space error (DER_NOSPACE).

Test Area: Pool tests

Create a pool the same size as the NVMe drive. Write I/O until the pool fills up; once the drive is full, it should not allow more data to be written and should return a no-space error.

Test Area: Pool tests

Create the pool with the maximum NVMe size and delete it. Run this in a loop: for example, if NVMe is 2TB, create pools of 1TB, 500GB, and 500GB, then delete all the pools. Do this in a loop and make sure pool creation works and the space can be reclaimed.

Test Area: Pool tests
Verify Fragmentation

Create the pool with 12GB.

Add 8GB of data on NVMe and 4GB on SCM. The I/O has to be written into different containers (~100) with different sizes.

Use smaller sizes for NVMe (4K-32K) and for SCM (8 bytes-4K) to create more fragmentation (see the sketch below).

Fragmentation information will be needed from the pool query in the future to validate the fragmentation number.

Test Area: Unit tests (talk to Niu)
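A sketch of how the fragmentation workload could be planned: ~100 containers receiving a mix of small NVMe extents (4K-32K) and small SCM records (8 bytes-4K) until the 8GB/4GB targets are reached. The 50/50 split between NVMe and SCM records is an assumption.

# Sketch only: build a per-container list of (tier, size) records for the
# fragmentation test.
import random

def fragmentation_plan(containers=100, nvme_budget=8 * 1024 ** 3,
                       scm_budget=4 * 1024 ** 3, seed=0):
    rng = random.Random(seed)
    plan = {c: [] for c in range(containers)}
    nvme_used = scm_used = 0
    while nvme_used < nvme_budget or scm_used < scm_budget:
        cont = rng.randrange(containers)
        if nvme_used < nvme_budget and rng.random() < 0.5:
            size = rng.randint(4 * 1024, 32 * 1024)      # NVMe-bound record
            nvme_used += size
            plan[cont].append(("nvme", size))
        elif scm_used < scm_budget:
            size = rng.randint(8, 4 * 1024 - 1)          # SCM-bound record
            scm_used += size
            plan[cont].append(("scm", size))
    return plan

if __name__ == "__main__":
    # smaller budgets here just to keep the demo fast
    plan = fragmentation_plan(containers=10, nvme_budget=8 * 1024 ** 2,
                              scm_budget=4 * 1024 ** 2)
    print(sum(len(v) for v in plan.values()), "records planned")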
Pool Extend

Extend a single pool to multiple targets. Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool to all targets at once. Verify data integrity after the pool extension is done.

Test Area: Pool tests

Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool targets one by one; for example, with 6 servers, create the pool with 2 and extend the pool to 4 servers one by one. Verify data integrity after the pool extension is done.

Test Area: Pool tests

Extend multiple pools to targets. Create a few data sets on different pools (1K/4K/1M/1T). Extend the pools to all targets at once. Verify data integrity after the pool extension is done.

Test Area: Pool tests

Create a few data sets on different pools (1K/4K/1M/1T). Extend the pool targets one by one; for example, with 6 servers, create the pool with 2 and extend the pool to 4 servers one by one. Verify data integrity after the pool extension is done.

Test Area: Pool tests

Pool Exclude

Exclude the target from the pool. Create a few data sets on different pools (1K/4K/1M/1T). Exclude the target from all pools at once. Add the target back to the pool and verify data integrity after the exclusion.

Create a few data sets on different pools (1K/4K/1M/1T). Exclude the target from all pools one by one. Add the target back to the pool and verify data integrity after the exclusion.

Test Area: Rebuild
NVMe Rebuild

Single drive rebuild. Use a minimum of 4 servers and load 50% of the drives.

Shut down a single server, or eject the drive, and make sure the data gets rebuilt on another NVMe drive.

Verify data integrity after the NVMe rebuild.

Test Area: Rebuild
Object

Create a large number of objects. Update/fetch with different object IDs in a single pool created on NVMe. Verify the objects are created and the data is not corrupted.

Test Area: Saurabh (I/O)

Create a large number of objects in multiple pools. Use different pool sizes for NVMe (pool sizes 1M/1G/1T). Use different record sizes in the objects (single value/array). Verify the objects are created and the data is not corrupted.

Test Area: Saurabh (I/O)
Trim Testing

Verify trim works after a certain amount of I/O. Need to find a way to issue only trim operations on the drives. TBD.

Test Area: Unit tests
Performance

Compare the DAOS performance.

SPDK can be used to measure the raw performance of the drive. Run a performance utility (TBD) without DAOS, and IOR with DAOS.

daos_perf VOS raw performance

daos_perf echo mode

daos_perf DAOS mode

Sizes to be covered for performance: 4K, 16K, 1M, 8M, 32M.

Performance measurement for read/write bandwidth and IOPS.

daos_perf test: run the daos_perf test with VOS and with DAOS.

It is better to get all these different numbers in a single graph/text file for the different sizes, for better comparison (see the sketch below).

Test Area: Performance Tests
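A sketch of how the different runs (SPDK raw, daos_perf VOS/echo/DAOS, IOR over DAOS) could be collected into a single comparison file per transfer size. The mode names and result values are placeholders to be filled in from the actual runs.

# Sketch only: write one CSV with write/read bandwidth and IOPS per mode and size.
import csv

SIZES = ["4K", "16K", "1M", "8M", "32M"]
MODES = ["spdk_raw", "daos_perf_vos", "daos_perf_echo", "daos_perf_daos", "ior_daos"]

def write_comparison(results, path="nvme_perf_comparison.csv"):
    """results: {(mode, size): (write_MBps, read_MBps, iops)}"""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["mode", "xfer_size", "write_MBps", "read_MBps", "IOPS"])
        for mode in MODES:
            for size in SIZES:
                wr, rd, iops = results.get((mode, size), ("", "", ""))
                writer.writerow([mode, size, wr, rd, iops])

if __name__ == "__main__":
    # placeholder numbers; real values come from the performance runs
    write_comparison({("daos_perf_daos", "1M"): (2100, 3500, 2100)})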
Verify performance degradation over time.

Create a pool, update to fill the pool with many containers, then destroy the containers. Different IOR runs with different sizes can be run in parallel.

Run a single IOR job in between to validate the performance numbers (see the sketch below).

This needs to be done for hours/days (fill the containers, delete the containers).

Measure the performance throughout and at the end to validate that it has not dropped at any point.

We need to measure performance degradation over time. This will be a good exercise to verify whether our TRIM support is efficient (requires 3D NAND).

Test Area: Soak Testing
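A sketch of the degradation check on the periodic single-IOR results collected during the soak run. The 10% threshold is an assumption; the bandwidth samples would come from the in-between IOR jobs.

# Sketch only: flag a regression if any later sample drops too far below the
# first (baseline) sample.
def check_degradation(samples_mbps, max_drop=0.10):
    """samples_mbps: bandwidth samples in chronological order."""
    if not samples_mbps:
        return True
    baseline = samples_mbps[0]
    for i, bw in enumerate(samples_mbps[1:], start=1):
        if bw < baseline * (1.0 - max_drop):
            print(f"sample {i}: {bw} MB/s is more than {max_drop:.0%} below baseline {baseline} MB/s")
            return False
    return True

if __name__ == "__main__":
    # placeholder numbers standing in for the periodic IOR results
    print(check_degradation([3500, 3480, 3510, 3100]))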
With more fragmentation:

1> Create a small data set with epochs 1, 2, and 3.

2> Discard the 3rd epoch.

3> Run a single IOR to verify the performance.

Do the above steps ~10000 times. Make sure that at the end the discarded fragmentation is not creating issues for new writes, and that the same in-between IOR run still performs up to the same level.

Use smaller sizes for NVMe (4K-32K) and for SCM (8 bytes-4K) to create more fragmentation.

Test Area: Soak Testing
Metadata

Create a pool of small size (NVMe size 1G).

We need to create a small number of files using IOR.

The direct API can be used, but use a different epoch for each record. (Check the DAOS_MD_CAP environment variable; this can be set low for testing purposes.)

Once metadata is full, it should not allow any more data to be written to the pool, even if NVMe has space, because there is no metadata left. (Look at the defect DAOS-1936.)

In the future, a pool query may be available to get the metadata information.

Run mdtest to fill the pool with metadata in the future, when POSIX support is available.

See if we can maintain 6% of metadata capacity under load (meaning reserving some percentage (6%) of metadata capacity up front and making sure it cannot be taken while the system is operating, so that no more than the remaining 94% of pool metadata SCM capacity is used). This feature is not available as of March 2020 and is planned for the next release, most likely 1.2.

Test Area: Unit tests (Jeff has a tool to estimate the metadata space). See DAOS-1512.

Validate lots of small I/Os followed by a lot of metadata operations.

(1) test_metadata_fillup: Verify no I/O happens after metadata is full.

(2) test_metadata_addremove: Verify metadata releases the space after container delete.

(3) test_metadata_server_restart: Verify 2000 small-size IOR containers after server restart. The test writes IOR in 5 different threads for faster execution time. Each thread creates 400 (8-byte) containers in the same pool. Restart the servers, read the IOR container files written previously, and validate data integrity using the IOR options "-R -G 1".

server/metadata.py

tags: pr,hw,small

tags=server,metadata,metadata_fillup

metadata_addremove

metadata_ior


DAOS-4858: Verify a container can be successfully deleted when the storage pool is full (ACL grant/remove modification).

(4) test_container_removal_after_der_nospace: Verify a container can be successfully deleted when the storage pool is full (ACL grant/remove modification).

server/metadata.py

tags: pr,hw,small

tags=server,metadata,metadata_der_nospace











Soak

Run multiple application threads in parallel for days. Use different workload operations running in parallel (write/read/delete) in a loop for days (see the sketch below). Other events can be included as the test grows, such as pool exclude, disk ejection, rebuild, replication, and so on. The test flow will be designed once we start writing the soak testing.

Test Area: Soak
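A sketch of the parallel-workload skeleton for the soak run. The do_write/do_read/do_delete functions are hypothetical placeholders for the real application or IOR wrappers; the duration would be days rather than seconds.

# Sketch only: run write/read/delete workers in parallel until a deadline.
import time
from concurrent.futures import ThreadPoolExecutor

DURATION_S = 5                       # soak runs would use days; short for the sketch

def do_write():  pass                # placeholder workload operations
def do_read():   pass
def do_delete(): pass

def worker(op, deadline):
    iterations = 0
    while time.time() < deadline:
        op()
        iterations += 1
    return op.__name__, iterations

if __name__ == "__main__":
    deadline = time.time() + DURATION_S
    ops = [do_write, do_read, do_delete]
    with ThreadPoolExecutor(max_workers=len(ops)) as pool:
        futures = [pool.submit(worker, op, deadline) for op in ops]
        for fut in futures:
            print(fut.result())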
Control Plane/Management for NVMe
  • NVMe SSD discovery with "discover" bindings
  • NVMe SSD burn-in with "burnin" bindings
  • NVMe SSD configuration with "configuration" bindings
TBD

Test Area: Configure (Amanda)

Control Plane/Management for NVMe: pro-active action based on telemetry data (rebalancing)
  • evicting an SSD based on high temperature
  • wear-leveling data
TBD

Test Area: Configure (Amanda)

Control Plane/Management for NVMe
  • SSD firmware image update
TBD

Test Area: Configure (Amanda)