
Hardware Configuration

Fabric: PSM2 (preferred)
Number of servers: 2 to 6
Drives per server: a few systems have multiple drives, so run the first I/O tests with 2 NVMe drives per server
Number of clients: 4 to 8, with 32 processes per client node
daos_nvme.conf: use the default options for now:

[Nvme]
TransportID "trtype:PCIe traddr:0000:81:00.0" Nvme0
TransportID "trtype:PCIe traddr:0000:82:00.0" Nvme1
TimeoutUsec 0
ActionOnTimeout None
AdminPollRate 100000
HotplugEnable No
HotplugPollRate 0
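
The traddr values above must match NVMe devices that are actually present on each server. Below is a minimal sanity-check sketch (the config path is an assumption; PCI class code 0x0108 identifies NVM-subsystem controllers):

import os
import re

# Compare the traddr values configured in daos_nvme.conf against the
# NVMe devices the kernel reports under /sys/bus/pci/devices.
CONF = "daos_nvme.conf"  # assumed path to the file shown above

with open(CONF) as f:
    configured = set(re.findall(r"traddr:([0-9a-fA-F:.]+)", f.read()))

present = set()
for dev in os.listdir("/sys/bus/pci/devices"):
    with open("/sys/bus/pci/devices/%s/class" % dev) as f:
        if f.read().startswith("0x0108"):   # NVM subsystem class code
            present.add(dev)

print("configured:", sorted(configured))
print("detected:  ", sorted(present))
for addr in sorted(configured - present):
    print("WARNING: %s is configured but not detected" % addr)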

Tests:

Unit Test
  Run src/vos/vea/tests/vea_ut and src/eio/smd/tests/smd_ut.

Existing Functional Test
  Identify and run the existing functional test cases with NVMe.
  Comment: manual testing is in progress; the test case information will be added here once it is done.
I/O
  Create a pool with a small NVMe size (0/1G/48G):
    • Write/read data <4K (1B/1K/4K, random and sequential) and make sure it does not use NVMe.
    • Write/read data >4K (1M/16G, random and sequential) and make sure it uses NVMe.
  Create a pool with a large NVMe size (1TB/2TB):
    • Write/read data <4K (1B/1K/4K, random and sequential) and make sure it does not use NVMe.
    • Write/read data >4K (1M/16G/1TB, random and sequential) and make sure it uses NVMe (see the sketch below).
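
The <4K versus >4K split reflects small records staying on SCM while larger ones go to NVMe. A minimal sketch of the size/pattern matrix, using standard IOR flags (-t transfer size, -b block size, -w/-r write and read, -z random offsets); any DAOS-specific backend options are omitted and would need to be filled in for a real run:

import itertools

SMALL = ["1", "1k", "4k"]   # expected to stay off NVMe
LARGE = ["1m", "16g"]       # expected to land on NVMe

for xfer, rand in itertools.product(SMALL + LARGE, (False, True)):
    cmd = ["ior", "-w", "-r", "-t", xfer, "-b", "16g"]
    if rand:
        cmd.append("-z")    # random offsets instead of sequential
    print(" ".join(cmd), "# expect NVMe usage:", xfer in LARGE)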
I/O with Server/System Restart
  Create a pool with a small NVMe size (0/1G/48G):
    • Write/read data <4K (1B/1K/4K): write the data, stop the server, start the server, then read the data back and check its integrity.
    • Write/read data >4K (1M/16G): write the data, reboot the node, start the server, then read the data back and check its integrity.
  Create a pool with a large NVMe size (1TB/2TB):
    • Write/read data <4K (1B/1K/4K): write the data, stop the server, start the server, then read the data back and check its integrity.
    • Write/read data >4K (1M/16G): write the data, reboot the node, start the server, then read the data back and check its integrity.
  Create a pool with a large NVMe size (1TB/2TB) and write/read a single IOR data set:
    • Kill the server while IOR is writing, then start the server; I/O should continue after the server starts (to be confirmed). Do the same while a read is in progress.
  Create a pool with a large NVMe size (1TB/2TB) and write/read multiple IOR data sets, including mixed read/write:
    • Kill the server while multiple writes are in flight, then start the server; I/O should continue after the server starts (to be confirmed). Do the same while multiple reads are in progress (a sketch of the kill-while-writing case follows below).
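
A minimal sketch of the kill-while-writing case; the server stop/start steps are placeholders for whatever launch mechanism is in use (orterun, systemd, pdsh, ...), not actual DAOS tooling:

import subprocess
import time

# Start a long write, kill the server mid-I/O, restart it, and see
# whether the I/O completes.
io = subprocess.Popen(["ior", "-w", "-t", "1m", "-b", "16g"])
time.sleep(30)                                    # let the write get going
subprocess.call(["pkill", "-9", "daos_server"])   # kill the server mid-write
subprocess.Popen(["daos_server", "start"])        # placeholder restart step
print("IOR exit code after restart:", io.wait())  # does the I/O continue?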
Large Number of Pools with Server/System Restart
  Create a large number of pools (10,000) with different NVMe sizes and write mixed data across all of them (1K/4K/1M/1T):
    • Write the data, stop the server, start the server, then read the data back and check its integrity.
    • Write the data, reboot the node, start the server, then read the data back and check its integrity.
Pool Capacity
  • Create an NVMe pool of size 1GB; writing more than 1GB of I/O should fail with a no-space error (ENOSPC).
  • Create a pool the same size as the NVMe drive and write I/O until the pool fills up; once the drive is full, further writes should be rejected with the same no-space error.
  • Create a pool of the maximum NVMe size and delete it, in a loop (see the sketch below). For example, with a 2TB NVMe drive, create pools of 1TB, 500GB, and 500GB, then delete all of them; repeat to make sure pool creation keeps working and the space is reclaimed.
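
A minimal sketch of the create/delete reclamation loop for the 2TB example; "dmg pool create/destroy" stands in for whatever management tool is current, and the UUID parsing is an assumption about its output format:

import subprocess

SIZES = ["1TB", "500GB", "500GB"]   # fits a 2TB drive if space is reclaimed

for i in range(100):                # repeat to catch slow leaks
    pools = []
    for size in SIZES:
        out = subprocess.check_output(
            ["dmg", "pool", "create", "--nvme-size", size])
        pools.append(out.decode().split()[-1])  # assumes UUID is the last token
    for uuid in pools:
        subprocess.check_call(["dmg", "pool", "destroy", uuid])
    # if space were not reclaimed, a later create would fail with ENOSPC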
Pool Extend
  Extend a single pool to multiple targets:
    • Create a few data sets on a single pool (1K/4K/1M/1T) and extend the pool to all targets at once; verify data integrity after the extension completes.
    • Create a few data sets on a single pool (1K/4K/1M/1T) and extend the pool one target at a time; for example, with 6 servers, create the pool on 2 and extend it to the remaining 4 servers one by one. Verify data integrity after each extension (see the sketch below).
  Extend multiple pools to targets:
    • Create a few data sets on different pools (1K/4K/1M/1T) and extend the pools to all targets at once; verify data integrity after the extension completes.
    • Create a few data sets on different pools (1K/4K/1M/1T) and extend each pool one target at a time, as above; verify data integrity after each extension.
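
A minimal sketch of the one-by-one extension case (6 servers, pool created on 2); "dmg pool extend --ranks" is assumed syntax, and the integrity check is a stub:

import subprocess

POOL = "a0000000-0000-0000-0000-000000000000"   # example pool UUID

def verify_data_integrity(pool):
    # stub: re-read the 1K/4K/1M/1T data sets written before extending
    print("verifying data integrity on %s" % pool)

for rank in (2, 3, 4, 5):   # ranks 0 and 1 already host the pool
    subprocess.check_call(
        ["dmg", "pool", "extend", POOL, "--ranks", str(rank)])
    verify_data_integrity(POOL)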
Pool Exclude
  Exclude the pool from targets:
    • Create a few data sets on different pools (1K/4K/1M/1T) and exclude the pools from all targets at once; add the targets back to the pools and verify data integrity after the exclusion.
    • Create a few data sets on different pools (1K/4K/1M/1T) and exclude the targets one by one; add the targets back to the pools and verify data integrity after the exclusion.
Object
  • Create a large number of objects in a single pool created on NVMe; verify that the objects are created and the data is not corrupted.
  • Create a large number of objects in multiple pools created on NVMe (pool sizes 1M/1G/1T); verify that the objects are created and the data is not corrupted.
Performance
  Compare DAOS performance: run a performance utility (TBD) without DAOS and run IOR with DAOS; measure read/write bandwidth and IOPS.
Metadata
  Create a pool with a small NVMe size (1G) and run mdtest to fill the pool with metadata. Once full, the pool should not accept any further writes, even if the NVMe drive still has space (to be confirmed). A sketch of this fill loop follows below.
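
A minimal sketch of the metadata-fill loop; -n (files per process) and -d (target directory) are standard mdtest flags, while the mount point and the doubling strategy are assumptions:

import subprocess

n = 1000
while True:
    ret = subprocess.call(["mdtest", "-n", str(n), "-d", "/mnt/daos"])
    if ret != 0:
        print("metadata full at roughly %d files per process" % n)
        break
    n *= 2   # keep doubling until creation fails with a no-space error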
Control Plane/Management for NVMe (data input and comments: TBD for all items)
  • NVMe SSD discovery with "discover" bindings
  • NVMe SSD burn-in with "burnin" bindings
  • NVMe SSD configuration with "configuration" bindings
  • Pro-active action based on telemetry data (rebalancing): evicting SSDs based on high temperature, wear-leveling data
  • SSD firmware image update