Submitter: Stephen Pu from H3C
Reviewer: Johann Lombardi, Liang Zhen, and other team members from DAOS
Status: Needs more specification; under review
Expected result:
The requested item scope is defined for the DAOS and H3C collaboration before Q4 2022.
The scope may be divided into 2 to 3 sub-iterations.
1. Request priorities are defined and aligned.
2. Requests are specified at the product-definition level (NOT the design stage).
3. General feasibility and effort estimates can be given.
Block volume management:

| Scenario | Description | H3C Proposal | DAOS feedback | Priority | Contributed to community | Feasibility and effort estimation | Owner (who owns design, development, and testing) | Delivery plan (Q1/Q2/Q3 2022) | Risks and other comments/concerns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Create volume | The user can create a block storage volume with a specific size, name, and other attributes. | | | | | | | | |
| Delete volume | The user can delete a block storage volume by volume name or UUID. | | | | | | | | |
| Expand volume (online, offline) | The volume size can be expanded while the I/O service is running (online) or stopped (offline). | | | | | | | | |
| Query volume | A specific volume can be searched for by its name or UUID. | | | | | | | | |
| Thin provisioning | The volume presents its full provisioned capacity to the application servers, but nothing is allocated until write operations occur. | | | | | | | | |
| Thick provisioning | With thick provisioning, the complete amount of virtual disk storage capacity is pre-allocated on the physical storage when the virtual disk is created; a thick-provisioned virtual disk consumes all the space allocated to it in the datastore. | | | | | | | | |
| Modify volume attributes | The volume's attributes, such as its name, can be modified. | | | | | | | | |
| Recycle bin | The recycle bin is a snapshot-based recovery feature that enables restoring accidentally deleted volumes. | | | | | | | | |
| Snapshot | The data on volumes can be backed up by taking point-in-time snapshots. | | | | | | | | |
| Clone | A clone of a block storage volume is a copy made of an existing volume at a specific moment in time. | | | | | | | | |
| QoS | Quality of service (QoS) can be used to guarantee that the performance of critical workloads is not degraded. | | | | | | | | |
| NVMe over Fabrics | Use the NVMe over Fabrics (NVMe-oF) protocol to set up NVMe block storage. | | | | | | | | |
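For the volume-creation and NVMe-oF rows above, one plausible integration path is SPDK's DAOS bdev module (bdev_daos) exported through SPDK's NVMe-oF target. The sketch below is illustrative only: the pool/container labels, NQN, listen address, and the exact `bdev_daos_create` argument order are assumptions to verify against the installed SPDK version.

```python
#!/usr/bin/env python3
"""Sketch: expose a DAOS-backed block volume over NVMe-oF via SPDK RPCs.

Assumes a running SPDK nvmf_tgt built with the DAOS bdev module and an
existing DAOS pool/container; all names and addresses are placeholders.
"""
import subprocess

RPC = "./scripts/rpc.py"            # path to rpc.py in the SPDK tree
NQN = "nqn.2016-06.io.spdk:cnode1"  # example subsystem NQN

def rpc(*args: str) -> str:
    """Issue one SPDK JSON-RPC call via rpc.py and return its stdout."""
    r = subprocess.run([RPC, *args], check=True,
                       capture_output=True, text=True)
    return r.stdout.strip()

# 1. Create the DAOS-backed bdev (the "volume"): 1,048,576 blocks of
#    4 KiB = 4 GiB. Check `rpc.py bdev_daos_create -h` for the exact
#    argument order in your SPDK release.
rpc("bdev_daos_create", "daosvol0", "tank", "blockcont",
    "1048576", "4096")

# 2. Export the bdev through the NVMe-oF/TCP target.
rpc("nvmf_create_transport", "-t", "TCP")
rpc("nvmf_create_subsystem", NQN, "-a", "-s", "SPDK00000000000001")
rpc("nvmf_subsystem_add_ns", NQN, "daosvol0")
rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp",
    "-a", "192.168.1.10", "-s", "4420")
```

An initiator would then attach the volume with nvme-cli, e.g. `nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.spdk:cnode1`.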
Cluster deployment, upgrade, and monitoring:

| Scenario | Description | H3C Proposal | DAOS feedback | Priority | Contributed to community | Feasibility and effort estimation | Owner (who owns design, development, and testing) | Delivery plan (Q1/Q2/Q3 2022) | Risks and other comments/concerns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Automatic cluster deployment | | YES | | | | | | | |
| Automatic cluster installation | | YES | | | | | | | |
| Rollback (deployment) | | YES | | | | | | | |
| Online upgrade | | YES | | | | | | | |
| Offline upgrade | | YES | | | | | | | |
| Rollback (upgrade) | | YES | | | | | | | |
| Online monitoring | | YES | | | | | | | |
| Offline monitoring | | YES | | | | | | | |
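For the monitoring rows, a minimal online-monitoring loop can be built on the dmg admin CLI's JSON output. This is a sketch under assumptions: it requires a configured dmg control plane, and the JSON field names (`response`, `members`, `state`, `joined`) follow DAOS 2.x output and may differ between releases.

```python
#!/usr/bin/env python3
"""Sketch: a minimal online-monitoring loop over the dmg admin CLI."""
import json
import subprocess
import time

def system_members() -> list:
    """Return cluster members as reported by `dmg system query`."""
    r = subprocess.run(["dmg", "-j", "system", "query"],
                       check=True, capture_output=True, text=True)
    return json.loads(r.stdout).get("response", {}).get("members", [])

while True:
    for member in system_members():
        # Flag any engine rank that is not in the healthy "joined" state.
        if member.get("state") != "joined":
            print(f"rank {member.get('rank')} is {member.get('state')}")
    time.sleep(30)
```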
Block storage pool and hardware operations:

| Scenario | Description | H3C Proposal | DAOS feedback | Priority | Contributed to community | Feasibility and effort estimation | Owner (who owns design, development, and testing) | Delivery plan (Q1/Q2/Q3 2022) | Risks and other comments/concerns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Create block storage pool | The user can create a block storage pool with a specific size and name. | | | | | | | | |
| Expand pool (online) | A block storage pool can be expanded without interrupting its operation (online case). | | | | | | | | |
| Add new node (online) | A new server node can be added to the existing cluster without interrupting cluster operation (online case). | | | | | | | | |
| Remove node (online) | An existing server node can be removed from the cluster without interrupting cluster operation (online case). | | | | | | | | |
| Add new SSD (online) | A new NVMe SSD can be added to a server node without interrupting cluster operation (online case). | | | | | | | | |
| Replace SSD (online) | An NVMe SSD in a server node can be replaced with a new one (online case). | | | | | | | | |
| Remove SSD (online) | A running or failed NVMe SSD can be removed from the system (online case). | | | | | | | | |
| Add new PMEM (online) | A new PMEM module can be added to a server node without interrupting cluster operation (online case). | | | | | | | | |
| Replace PMEM (online) | A PMEM module in a server node can be replaced with a new one (online case). | | | | | | | | |
| Remove PMEM (online) | A running or failed PMEM module can be removed from the system (online case). | | | | | | | | |
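Several of the pool and device scenarios above map onto existing dmg verbs. The sketch below shows the shape of those calls; the pool label, sizes, rank numbers, and device UUIDs are placeholders, and flag spellings (`--rank` vs `--ranks`, etc.) should be confirmed against `dmg --help` for the DAOS version in use.

```python
#!/usr/bin/env python3
"""Sketch: dmg calls behind the pool and device scenarios above."""
import subprocess

def dmg(*args: str) -> None:
    subprocess.run(["dmg", *args], check=True)

# Create a block storage pool with a given size and label.
dmg("pool", "create", "--size=10TB", "tank")

# Online pool expansion: grow the pool onto newly added ranks (nodes).
dmg("pool", "extend", "tank", "--ranks=4,5")

# Online node removal: drain the rank first so data rebuilds elsewhere.
dmg("pool", "drain", "tank", "--rank=3")

# Online NVMe SSD replacement: repoint the engine at the new device.
dmg("storage", "replace", "nvme",
    "--old-uuid=<failed-ssd-uuid>", "--new-uuid=<new-ssd-uuid>")
```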
Robustness and optimization:

| Scenario | Description | H3C Proposal | DAOS feedback | Priority | Contributed to community | Feasibility and effort estimation | Owner (who owns design, development, and testing) | Delivery plan (Q1/Q2/Q3 2022) | Risks and other comments/concerns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Network exception robustness | The cluster keeps providing I/O service under random network failures. | | | | | | | | |
| NIC exception robustness | The cluster keeps providing I/O service when a random node's NIC fails. | | | | | | | | |
| SSD disk exception | SSD disk exception is the test-driven case. | | | | | | | | |
| Node exception | Node exception is the test-driven case. | | | | | | | | |
| PMEM space optimization | Currently, the PMEM capacity is divided by the number of NVMe SSDs, which wastes a large amount of PMEM space. | | | | | | | | |
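A hedged sketch of how the NIC-exception case might be driven from a test harness: it fails one fabric interface with `ip link`, then checks from the admin node that the pool still answers. Hostnames, the interface name, and the pool label are placeholders, and a real test would also verify in-flight I/O rather than just control-plane reachability.

```python
#!/usr/bin/env python3
"""Sketch: driving the NIC-exception robustness case from a test harness."""
import subprocess
import time

VICTIM = "server03"  # node whose NIC is failed (placeholder)
IFACE = "eth1"       # fabric interface under test (placeholder)

def ssh(host: str, cmd: str) -> None:
    subprocess.run(["ssh", host, cmd], check=True)

def pool_is_reachable() -> bool:
    # `dmg pool query` exits non-zero if the pool cannot be reached.
    r = subprocess.run(["dmg", "pool", "query", "tank"],
                       capture_output=True, text=True)
    return r.returncode == 0

ssh(VICTIM, f"ip link set {IFACE} down")  # inject the NIC failure
time.sleep(5)
assert pool_is_reachable(), "I/O service lost after a single NIC failure"
ssh(VICTIM, f"ip link set {IFACE} up")    # restore the interface
```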
Performance:

| Scenario | Description | H3C Proposal | DAOS feedback | Priority | Contributed to community | Feasibility and effort estimation | Owner (who owns design, development, and testing) | Delivery plan (Q1/Q2/Q3 2022) | Risks and other comments/concerns |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rebuild speed above 4 TB/h | The data rebuild speed should be above 4 TB/h in a cluster of 3 nodes, each with 8 NVMe SSDs. | | | | | | | | |
| I/O recovery time | After a node reboots, how many seconds does it take for the I/O service to recover to 100% of its pre-reboot level? | | | | | | | | |
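As a quick plausibility check on the 4 TB/h rebuild target, the arithmetic below (no DAOS-specific assumptions) converts it into per-node and per-SSD bandwidth for the stated 3-node, 8-SSD-per-node configuration.

```python
# Plausibility check: convert the 4 TB/h rebuild target into per-node and
# per-SSD bandwidth for the 3-node, 8-NVMe-SSD-per-node configuration.
target_tb_per_hour = 4
nodes, ssds_per_node = 3, 8

aggregate_gbps = target_tb_per_hour * 1000 / 3600    # TB/h -> GB/s, ~1.11
per_node_gbps = aggregate_gbps / nodes               # ~0.37 GB/s per node
per_ssd_mbps = per_node_gbps * 1000 / ssds_per_node  # ~46 MB/s per SSD

print(f"{aggregate_gbps:.2f} GB/s aggregate, "
      f"{per_node_gbps:.2f} GB/s per node, {per_ssd_mbps:.0f} MB/s per SSD")
```

At roughly 46 MB/s per SSD, the target sits far below raw NVMe bandwidth, so meeting it is more likely a question of rebuild parallelism and scheduling than of device throughput.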