...
Item | Description
---|---
Fabric | PSM2 preferred
Number of Servers | 2 to 6
Drives per Server | A few systems have multiple drives, so run the first I/O tests with 2 NVMe drives per server if possible
Number of Clients | 4 to 8, with 32 processes per client node
daos_nvme.conf | Use the default `[Nvme]` options for now
Minimum Pool Size | 1G
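Several test rows below say to "use the maximum pool size based on NVMe size on each server and number of total servers." A minimal sketch of deriving that number from the server counts in the table above; the 5% reserve is an assumption for illustration, not a DAOS constant:

```python
def max_nvme_pool_size(nvme_bytes_per_drive, drives_per_server, num_servers,
                       reserve_pct=5):
    """Largest NVMe pool size (in bytes) to request for a full-capacity pool.

    The reserve_pct default is an assumption, not a DAOS rule: it simply
    keeps the request slightly below the raw capacity so pool creation
    does not fail on per-target overhead.
    """
    raw = nvme_bytes_per_drive * drives_per_server * num_servers
    return raw * (100 - reserve_pct) // 100

# Example: 6 servers, 2 NVMe drives per server, 2 TB per drive.
TB = 10 ** 12
print(max_nvme_pool_size(2 * TB, 2, 6))  # bytes to request for the pool
```

The same helper can be reused for the "small NVMe size (0/1G/48G)" cases by passing the per-drive size directly.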
Tests:
Test | Condition | Data Input | Comments | Test Area |
---|---|---|---|---|
UnitTest | src/vos/vea/tests/vea_ut, src/eio/smd/tests/smd_ut. Look at DAOS-1246 and include all unit tests. daos_epoch_discard() would also create more fragmentation, so add a unit test specific to that API call. | [daos_epoch_discard() has been removed from the code, so unit test code is no longer needed] | | Unit tests |
Existing Functional Test | Identify and run the existing functional test cases with NVMe | Manual testing is in progress; once it is done, the test case information will be updated here. (The patch is ready at https://review.hpdd.intel.com/#/c/33669/ but cannot be merged until we get a solution from the DCO team on DCO-8268.) | SAMIR | |
Hardware | Run soak testing and regression testing with a mix of Apache Pass + NVMe (close to the final A21 server memory sizes) | Make sure to account for any hardware changes in terms of memory, CPU, drives, and fabric. | | SOAK Testing |
I/O | Create the pool with a small NVMe size (0/1G/48G) | Write/read data (<4K): 1B/1K/4K [random/sequential] and make sure it does not use NVMe. Write/read data with non-standard sizes 4025B/259K/1.1M/30.22M [random/sequential] and make sure it uses NVMe. | For the second case, data sizes can be generated randomly instead of using predefined fixed sizes. | Saurabh |
| | Write/read data (>4K): 1M/16G [random/sequential] and make sure it uses NVMe. Write/read data with non-standard sizes 4025B/259K/1.1M/30.22M [random/sequential] and make sure it uses NVMe. | For the second case, data sizes can be generated randomly instead of using predefined fixed sizes. | Saurabh |
| Create the pool with the maximum NVMe size | Write/read data (<4K): 1B/1K/4K [random/sequential] and make sure it does not use NVMe. Write/read data with non-standard sizes 4025B/259K/1.1M/30.22M [random/sequential] and make sure it uses NVMe. | For the second case, data sizes can be generated randomly instead of using predefined fixed sizes. Use the maximum pool size based on the NVMe size on each server and the total number of servers. | Saurabh |
| | Write/read data (>4K): 1M/16G/1TB [random/sequential] and make sure it uses NVMe. Write/read data with non-standard sizes 4025B/259K/1.1M/30.22M [random/sequential] and make sure it uses NVMe. | For the second case, data sizes can be generated randomly instead of using predefined fixed sizes. | Saurabh |
Unaligned IO | Try using the offset from the API, or use the core Python API to modify an existing array and read it back. Write 1M, modify 1 byte at different offsets, then read it back. | The test code daos_run_io_conf.c does something similar, so it is worth reusing. | | Unit tests (some more recovery tests; talk to Di) |
I/O with Server/System restart | Create the pool with a small NVMe size (0/1G/48G) | Write/read data (<4K): 1B/1K/4K | Write the data / stop the servers / start the servers / read the data back and check data integrity. | Saurabh |
| | Write/read data (>4K): 1M/16G | Write the data / reboot the node / start the servers / read the data back and check data integrity. After server start, SPDK needs to be set up again. | Saurabh |
| Create the pool with the maximum NVMe size | Write/read data (<4K): 1B/1K/4K | Write the data / stop the server / start the server / read the data back and check data integrity. Use the maximum pool size based on the NVMe size on each server and the total number of servers. | Saurabh |
| | Write/read data (>4K): 1M/16G | Write the data / reboot the node / start the server / read the data back and check data integrity. After server start, SPDK needs to be set up again. | Saurabh |
| Create the pool with the maximum NVMe size. | Write a single IOR data set; read a single IOR data set. | Kill the server while IOR is writing and restart the server after 5-10 minutes; I/O should continue after the server starts. Do the same while a read is in progress. Use the maximum pool size based on the NVMe size on each server and the total number of servers. | Rebuild |
Verify application timeouts when servers are powered down | | Any data sizes, but use more threads to load the data at the same time. | Kill the server and wait until the application times out. We need to find out the timeout value for dropping the RPC connection. | Rebuild |
| Create the pool with the maximum NVMe size. | Write multiple IOR data sets, read multiple IOR data sets, and read/write together (real-world scenarios). Use different applications together, if available, instead of just IOR in different threads. | Kill the server while multiple writes are in progress and restart the server after 5-10 minutes; I/O should continue after the server starts. Do the same while multiple reads are in progress. | Soak |
Re-written data fetch validation | | Write data on NVMe (>4K). Re-write the same array with a small size (~1-2 bytes), which will go through SCM. Repeat, changing ~100 bytes with different data. Do a fetch, which will combine the records, and verify it. | When overwriting, a new epoch entry is created, but it reuses the old data set and updates only the new byte values. During fetch, the epochs are aggregated and the result is returned with the modified bytes. | Unit tests |
Re-written data fetch validation | | Write data on SCM (<4K). Extend the data set using the same array with a larger size (>8K), which will go through NVMe. Repeat a few times with different data. Fetch the data set, which will combine the records, and verify it. | | Unit tests |
Verify there is no leak after a PMDK transaction (this cannot be tested until we get some hooks to control PMDK transactions) | | PMDK may have an API to get the allocated space, but it won't be easy to use as-is. A pool query will be provided in the future, which can track the size to verify memory leaks. Example: 1> Create a 3G pool. 2> Write 1G of data and shut down the server before the transaction commits. 3> When the server starts again, verify that it reclaims the full 3G of space. | We need a fault-injection layer to control the PMDK transaction on server start/stop, so the server can be stopped before the PMDK transaction runs. | Unit tests (talk to Jeff/Di) |
Large number of pools with Server/System restart | Create a large number of pools (10000) with different NVMe sizes | Write mixed data across all the pools (1K/4K/1M/1T) | Write the data / stop the server / start the server / read the data back and check data integrity. | Pool tests |
| | | Write the data / reboot the node / start the server / read the data back and check data integrity. | Pool tests |
Pool Capacity | Create an NVMe pool of size 1GB | Write more than 1GB of I/O, which should fail with -DER_NOSPACE | | Pool tests |
| Create a pool the same size as the NVMe drive | Write I/O until the pool fills up; once the drive is full, it should not allow any more data to be written and should fail with -DER_NOSPACE | | Pool tests |
| Create pools up to the maximum NVMe size and delete them. | Run this in a loop: for example, if NVMe is 2TB, create pools of 1TB, 500GB, and 500GB, then delete all the pools. Repeat in a loop and make sure pool creation works and the space can be reclaimed. | | Pool tests |
Verify fragmentation. | Create a pool with 12GB. Add 8GB of data on NVMe and 4GB on SCM. The I/O has to be written across many containers (~100) with different sizes; use smaller sizes for NVMe (4K-32K) and for SCM (8 bytes-4K) to create more fragmentation. | | Fragmentation information will be needed from pool query in the future to validate the fragmentation number. | Unit tests (talk to Niu) |
Pool Extend | Extend a single pool to multiple targets | Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool to all targets at once. | Verify data integrity after the pool extension is done | Pool Test |
| | Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool targets one by one; for example, with 6 servers, create the pool on 2 servers and extend it to the remaining 4 servers one by one. | Verify data integrity after the pool extension is done | Pool Test |
| Extend multiple pools to targets | Create a few data sets on different pools (1K/4K/1M/1T). Extend the pools to all targets at once. | Verify data integrity after the pool extension is done | Pool Test |
| | Create a few data sets on different pools (1K/4K/1M/1T). Extend the pool targets one by one; for example, with 6 servers, create the pools on 2 servers and extend them to the remaining 4 servers one by one. | Verify data integrity after the pool extension is done | Pool Test |
Pool Exclude | Exclude targets from a pool | Create a few data sets on different pools (1K/4K/1M/1T). Exclude the targets from all pools one by one. | Verify data integrity after the pool exclusion. | Rebuild |
NVMe rebuild | Single drive rebuild | Use a minimum of 4 servers and load the drives to 50%. | Shut down a single server (or eject a drive) and make sure the data gets rebuilt on another NVMe drive. Verify data integrity after the NVMe rebuild. | Rebuild |
Object | Create a large number of objects | Update/fetch with different object IDs in a single pool created on NVMe | Verify the objects are created and the data is not corrupted | Saurabh (I/O) |
| Create a large number of objects in multiple pools | Use different pool sizes for NVMe (pool sizes 1M/1G/1T). Use different record sizes in the objects (single/array). | Verify the objects are created and the data is not corrupted | Saurabh (I/O) |
Trim Testing | Verify trim works after a certain amount of I/O | Need to find a way to run a trim-only operation on the drives. | TBD | Unit tests |
| Compare the DAOS performance | SPDK can be used to measure the raw performance of the drive: daos_perf VOS raw performance, daos_perf echo mode, daos_perf DAOS mode. Sizes to be covered for performance: 4K, 16K, 1M, 8M, 32M. | Performance measurement for read/write bandwidth and IOPS. It is better to collect all these numbers in a single graph/text file for the different sizes for easier comparison. | Performance Tests |
Verify performance degradation over time. | | Create a pool, fill it with updates across many containers, then destroy the containers. Different IOR runs with different sizes can be run in parallel. Run a single IOR job in between to validate the performance numbers. This needs to run for hours/days (fill the containers, delete the containers). Measure the performance throughout and at the end to validate that it does not drop at any point. | We need to measure performance degradation over time. This will be a good exercise to verify whether our TRIM support is efficient (requires 3D NAND). | Soak Testing |
With more fragmentation | | 1> Create small data sets with epochs 1, 2, and 3. 2> Discard the 3rd epoch. 3> Run a single IOR job to verify the performance. Repeat the steps above ~10000 times. Make sure that at the end the discarded fragmentation does not cause issues for new writes, and that the same IOR run in between still performs at the same level. | Use smaller sizes for NVMe (4K-32K) and for SCM (8 bytes-4K) to create more fragmentation. | Soak testing |
Metadata | Create a pool of small size (NVMe size 1G) | Create a small number of files using IOR. The direct API can be used, but use a different epoch for each record. (Check the DAOS_MD_CAP environment variable; it can be set low for testing purposes.) | Once the metadata is full, it should not allow any more data to be written to the pool, even if NVMe still has space but no metadata remains. (Look at the defect. In the future, pool query will be available to get metadata information. Run mdtest to fill the pool with metadata once POSIX support is available. See if we can maintain 6% of metadata under load, meaning reserving some percentage (6%) of the metadata capacity up front and making sure it cannot be taken while the system is operating, so no more than the remaining 94% of the pool's SCM metadata capacity is used. This feature is not available as of March 2020 and is planned for the next release, most likely 1.2.) | Unit tests (Jeff has a tool to estimate the metadata space) |
| | Validate lots of small I/Os followed by a lot of metadata operations: (1) test_metadata_fillup: verify no I/O happens after metadata is full. (2) test_metadata_addremove: verify metadata releases the space after container delete. (3) test_metadata_server_restart: verify 2000 small IOR containers after server restart. The test writes IOR in 5 different threads for faster execution; each thread creates 400 (8-byte) containers. | server/metadata.py tags: pr,hw,small, tags=server,metadata,metadata_fillup metadata_addremove metadata_ior | |
| | Verify a container can be successfully deleted when the storage pool is full; ACL grant/remove modification. (4) test_container_removal_after_der_nospace: verify a container can be successfully deleted when the storage pool is full; ACL grant/remove modification. | server/metadata.py tags: pr,hw,small, tags=server,metadata, metadata_der_nospace | |
Soak | Run multiple application threads in parallel for days | Use different workload operations (write/read/delete) running in parallel in a loop for days. | Other events can be included as the test grows, such as pool exclude, disk ejection, rebuild, replicas, and so on. The test flow will be designed once we start writing the soak tests. | Soak |
Control Plane/Management for NVMe | | TBD | TBD | Configure (Amanda)
Control Plane/Management for NVMe | Pro-active action based on telemetry data (rebalancing) | TBD | TBD | Configure (Amanda)
Control Plane/Management for NVMe | | TBD | TBD | Configure (Amanda)
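Several I/O rows above suggest replacing the fixed non-standard sizes (4025B/259K/1.1M/30.22M) with randomly generated ones, split around the 4K boundary below which writes stay on SCM and above which they go to NVMe. A minimal sketch of such a size generator; the 32M upper bound and inclusive/exclusive handling of the threshold are assumptions for illustration, not DAOS-defined values:

```python
import random

SCM_THRESHOLD = 4096  # per the plan: I/O <4K should not use NVMe, larger I/O should


def random_io_sizes(count, below_threshold, seed=None):
    """Generate non-standard transfer sizes on one side of the 4K threshold.

    below_threshold=True yields sizes expected to land on SCM (1B..4K);
    False yields sizes expected to land on NVMe (>4K..32M, an assumed cap).
    A seed makes a failing run reproducible.
    """
    rng = random.Random(seed)
    if below_threshold:
        return [rng.randint(1, SCM_THRESHOLD) for _ in range(count)]
    return [rng.randint(SCM_THRESHOLD + 1, 32 << 20) for _ in range(count)]


# Example: ten odd-sized transfers expected to use NVMe, reproducible via the seed.
print(random_io_sizes(10, below_threshold=False, seed=42))
```

After each write/read pass, the test would still need to confirm media placement (e.g. via pool query or space accounting) rather than trusting the size alone, since only the server decides where a record actually lands.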