Fixed
Details
Assignee: Tom Nabarro
Reporter: Michael Hennecke
Priority: P2-High
Affects versions:
Required for Version:
Fix versions:
Components:
Patch URL:
Story Points: 5
Created February 9, 2022 at 8:58 AM
Updated February 16, 2023 at 9:03 AM
Resolved April 8, 2022 at 12:22 PM
All PMem devices connected to the same Xeon CPU socket are combined to form a single device in AppDirect-interleaved mode (using ipmctl). We currently create one namespace on this device, and then mount this single /dev/pmemX to be used by the single engine that is running on that CPU socket.

In order to use next-generation Xeon processors (SPR and beyond) more efficiently with current-generation (200 Gbps) HPC fabric links, we will need to support more than one HPC fabric link per CPU socket. We therefore need a mechanism to run two (or more) engines per CPU socket (libfabric does not support striping across multiple interfaces), and to do this we will need to create two (or more) SCM devices per CPU socket.
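For reference, an interleaved AppDirect region can be split into multiple namespaces manually with ndctl. A minimal sketch, assuming a socket-0 region named region0 and illustrative sizes (the region name and sizes are examples, not values from this ticket):

  # Split the interleaved region into two fsdax namespaces, one per engine;
  # the sizes must fit within the total capacity of the region.
  ndctl create-namespace --region region0 --mode fsdax --size 1T
  ndctl create-namespace --region region0 --mode fsdax --size 1T

Each namespace then appears as its own /dev/pmemX block device that can be mounted for a separate engine.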
Implementation proposal: Provide a -S | --scm-namespaces N option to the daos_server storage prepare command, with a default of 1, and the ability to request 2 SCM namespaces per CPU socket.
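Under this proposal, preparing two SCM namespaces per socket would look something like the following (hypothetical invocation; the option does not exist yet and the final flag spelling may change):

  daos_server storage prepare --scm-namespaces 2

This would create two /dev/pmemX devices per CPU socket instead of one, allowing each engine on the socket to be backed by its own SCM device.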