Hardware Configuration
Item | Description |
---|---|
Fabric | PSM2 preferred |
Number of Servers | 2 to 6 |
Drives per Server | Some systems have multiple drives, so run the first I/O tests with 2 NVMe drives per server. |
Number of Clients | 4 to 8, with 32 processes per client node (CN). |
daos_nvme.conf | Use the default [Nvme] options for now. |
Minimum Pool Size | 1G |
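
For reference, daos_nvme.conf uses SPDK's INI-style configuration format. A minimal default [Nvme] section looks roughly like the sketch below; the PCI address and the trailing controller name are placeholders for whatever the local hardware reports, not prescribed values:

```ini
[Nvme]
    # One TransportID line per NVMe drive assigned to the server.
    # "0000:81:00.0" and "Nvme_0" are placeholders, not real defaults.
    TransportID "trtype:PCIe traddr:0000:81:00.0" Nvme_0
    # Leave the remaining SPDK options at their defaults for these tests.
```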
Tests:
Test | Condition | Data Input | Comments |
---|---|---|---|
Unit Test | Run src/vos/vea/tests/vea_ut and src/eio/smd/tests/smd_ut | | |
Existing Functional Tests | Identify and run the existing functional test cases with NVMe | Manual testing is in progress; the test case information will be added here once it is done. | |
I/O | Create the pool with a small NVMe size (0/1G/48G) | Write/read data below 4K (1B/1K/4K, random/sequential) and verify it does not use NVMe | |
| | Write/read data above 4K (1M/16G, random/sequential) and verify it uses NVMe | |
| Create the pool with a large NVMe size (1TB/2TB) | Write/read data below 4K (1B/1K/4K, random/sequential) and verify it does not use NVMe | |
| | Write/read data above 4K (1M/16G/1TB, random/sequential) and verify it uses NVMe | |
I/O with Server/System Restart | Create the pool with a small NVMe size (0/1G/48G) | Write/read data below 4K (1B/1K/4K) | Write the data, stop the server, start the server, read the data back, and check data integrity. |
| | Write/read data above 4K (1M/16G) | Write the data, reboot the node, start the server, read the data back, and check data integrity. |
| Create the pool with a large NVMe size (1TB/2TB) | Write/read data below 4K (1B/1K/4K) | Write the data, stop the server, start the server, read the data back, and check data integrity. |
| | Write/read data above 4K (1M/16G) | Write the data, reboot the node, start the server, read the data back, and check data integrity. |
| Create the pool with a large NVMe size (1TB/2TB) | Write a single IOR data set; read a single IOR data set | Kill the server while the write is in progress, then start it again and verify the I/O continues after the server starts. Repeat while a read is in progress. |
| Create the pool with a large NVMe size (1TB/2TB) | Write multiple IOR data sets; read multiple IOR data sets; read and write together | Kill the server while multiple writes are in progress, then start it again and verify the I/O continues after the server starts. Repeat while multiple reads are in progress. |
Large Number of Pools with Server/System Restart | Create a large number of pools (10,000) with different NVMe sizes | Write mixed data across all the pools (1K/4K/1M/1T) | Write the data, stop the server, start the server, read the data back, and check data integrity. |
| | | Write the data, reboot the node, start the server, read the data back, and check data integrity. |
Pool Capacity | Create an NVMe pool of size 1GB | Write more than 1GB of I/O; the writes should fail with ENOSPC | |
| Create a pool the same size as the NVMe drive | Write I/O until the pool fills up; once the drive is full, further writes should be rejected with ENOSPC | |
| Create pools up to the maximum NVMe size and delete them | Run this in a loop: for example, if the NVMe drive is 2TB, create pools of 1TB, 500GB, and 500GB, then delete all the pools. Repeat and verify that pool creation works and the space is reclaimed. | |
Pool Extend | Extend a single pool to multiple targets | Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool to all targets at once. | Verify data integrity after the pool extension is done. |
| | Create a few data sets on a single pool (1K/4K/1M/1T). Extend the pool one target at a time: for example, with 6 servers, create the pool on 2 servers and extend it to the remaining 4 servers one by one. | Verify data integrity after the pool extension is done. |
| Extend multiple pools to targets | Create a few data sets on different pools (1K/4K/1M/1T). Extend the pools to all targets at once. | Verify data integrity after the pool extension is done. |
| | Create a few data sets on different pools (1K/4K/1M/1T). Extend each pool one target at a time: for example, with 6 servers, create the pools on 2 servers and extend them to the remaining 4 servers one by one. | Verify data integrity after the pool extension is done. |
Pool Exclude | Exclude targets from a pool | Create a few data sets on different pools (1K/4K/1M/1T). Exclude all targets from the pools at once. | Add the targets back to the pools and verify data integrity after the exclusion. |
| | Create a few data sets on different pools (1K/4K/1M/1T). Exclude targets from the pools one by one. | Add the targets back to the pools and verify data integrity after the exclusion. |
Object | Create a large number of objects in a single pool created on NVMe | Verify the objects are created and the data is not corrupted | |
| Create a large number of objects in multiple pools created on NVMe (pool sizes 1M/1G/1T) | Verify the objects are created and the data is not corrupted | |
Performance | Compare DAOS performance | Run a performance utility (TBD) without DAOS, and IOR with DAOS. | Measure read/write bandwidth and IOPS. |
Metadata | Create a small pool (NVMe size 1G) | Run mdtest to fill the pool with metadata | Once the pool is full, it should not allow any more data to be written, even if NVMe still has space. (To be confirmed.) |
Control Plane/Management for NVMe | | TBD | TBD |
Control Plane/Management for NVMe | Proactive action based on telemetry data (rebalancing) | TBD | TBD |
Control Plane/Management for NVMe | | TBD | TBD |
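
Several rows above repeat the same pattern: write a known data set, stop or reboot the server, read the data back, and check integrity. A minimal sketch of that harness logic is below, assuming deterministically generated data so the expected block can be regenerated after the restart; the write/read/restart callables are placeholders for the real IOR or DAOS API steps, and the file-backed "pool" is only a stand-in for illustration:

```python
import hashlib
import os
import tempfile

def make_block(seed: int, size: int) -> bytes:
    """Deterministically generate a data block so it can be regenerated
    after a server restart and compared against what was read back."""
    out = bytearray()
    h = seed.to_bytes(8, "little")
    while len(out) < size:
        h = hashlib.sha256(h).digest()
        out.extend(h)
    return bytes(out[:size])

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def restart_check(write_fn, read_fn, restart_fn, seed=42, size=1 << 20):
    """Write a known block, restart, read back, and compare checksums."""
    block = make_block(seed, size)
    before = checksum(block)
    write_fn(block)
    restart_fn()                      # stop/start the server, or reboot the node
    after = checksum(read_fn(size))
    return before == after

# Stand-in for a DAOS pool: a plain file. A real test would write via IOR
# or the DAOS API and restart daos_server between the write and the read.
path = os.path.join(tempfile.mkdtemp(), "obj")
ok = restart_check(
    write_fn=lambda d: open(path, "wb").write(d),
    read_fn=lambda n: open(path, "rb").read(n),
    restart_fn=lambda: None,          # placeholder: no real server here
)
print(ok)
```

Comparing checksums rather than full buffers keeps the verification cheap even for the large (16G/1TB) data sets in the table.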
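
The Pool Capacity rows expect writes to fail cleanly with ENOSPC once a pool is full. A sketch of that check, assuming fixed-size chunk writes; `MockPool` is a hypothetical stand-in so the loop is runnable here, where a real test would write through the DAOS API and map its no-space error to ENOSPC:

```python
import errno

def fill_until_enospc(write_fn, chunk=1 << 20, max_chunks=10_000):
    """Write fixed-size chunks until the pool reports ENOSPC.
    Returns the number of bytes successfully written before the failure."""
    written = 0
    for _ in range(max_chunks):
        try:
            write_fn(chunk)
        except OSError as e:
            if e.errno == errno.ENOSPC:
                return written   # expected outcome: pool full
            raise                # any other error is a test failure
        written += chunk
    raise AssertionError("pool never reported ENOSPC")

# Hypothetical mock of a 1GB pool, for illustration only.
class MockPool:
    def __init__(self, capacity):
        self.capacity, self.used = capacity, 0
    def write(self, n):
        if self.used + n > self.capacity:
            raise OSError(errno.ENOSPC, "No space left on device")
        self.used += n

pool = MockPool(1 << 30)
print(fill_until_enospc(pool.write) == 1 << 30)  # pool accepts exactly 1GB
```

The same loop serves both capacity rows: the 1GB pool case and the pool sized to the whole drive, differing only in how much data is expected before ENOSPC.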