NVMe namespace CRUD and discovery support

NVMe controllers can expose [1][2] multiple independent LBA regions on a single flash device, called NVMe namespaces. Namespaces can then be partitioned using MBR or GPT partition tables.

From what I understand, using separate namespaces instead of plain partitioning can have performance benefits, depending on the NVMe controller implementation and the application architecture.

Our use-case is sharing a single NVMe device for multiple purposes (bcache, Ceph journal, or WAL & DB), which is currently done with partitions.

# 1 device (NVMe controller), 3 namespaces
# ...
nvme0n1 259:0 0 20G 0 disk
└─nvme0n1p1 259:1 0 20G 0 part /
nvme0n2 259:2 0 1G 0 disk
nvme0n3 259:3 0 1G 0 disk
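For reference, a namespace layout like the one above can be driven with nvme-cli, assuming the controller supports the optional Namespace Management feature. The device path, NSID, and sizes below are illustrative, and the hardware-touching commands are guarded so the sketch is safe to run anywhere:

```shell
# Sketch of namespace CRUD with nvme-cli (assumes the controller supports
# the optional Namespace Management feature; device path, NSID, and sizes
# are illustrative).

# NSZE/NCAP are counts of logical blocks: 1 GiB with 512-byte LBAs.
BLOCKS=$(( 1 * 1024 * 1024 * 1024 / 512 ))
echo "namespace size in blocks: $BLOCKS"

# Only touch hardware when the controller node is actually present.
if [ -c /dev/nvme0 ] && command -v nvme >/dev/null; then
    # Create: allocate the namespace; prints the new NSID on success.
    nvme create-ns /dev/nvme0 --nsze="$BLOCKS" --ncap="$BLOCKS" --flbas=0
    # Attach NSID 2 to controller 0 and rescan so /dev/nvme0n2 appears.
    nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
    nvme ns-rescan /dev/nvme0
    # Read: inspect the namespace data structure.
    nvme id-ns /dev/nvme0 -n 2
    # Delete: detach from the controller first, then delete.
    nvme detach-ns /dev/nvme0 -n 2 -c 0
    nvme delete-ns /dev/nvme0 -n 2
fi
```

Note that create/attach are two separate steps in NVMe: a namespace exists on the subsystem after creation, but the kernel only sees a block device once it is attached to a controller.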


sudo nvme list-ns -a /dev/nvme0
[ 0]:0x15ad15ad
[ 1]:0x61574d56
[ 2]:0x4e206572
[ 3]:0x2d454d56
[ 4]:0x30303030
[ 6]:0x61774d56
[ 7]:0x56206572
[ 8]:0x75747269
[ 9]:0x4e206c61
[ 10]:0x20654d56
[ 11]:0x6b736944
[ 16]:0x302e31
[ 18]:0x56500000
[ 19]:0x800
[ 64]:0x3000000
[ 65]:0x30003
[ 128]:0x4466
[ 129]:0x3

As a side note, lshw currently does not list namespaces as separate block devices; it only shows the controller.
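One possible way for lshw (or any discovery tool) to enumerate namespaces is to walk sysfs: on Linux, each controller at /sys/class/nvme/nvmeX exposes its namespaces as nvmeXnY child entries. A minimal sketch; the sysfs root is parameterized only so the walk can be exercised against a mock tree:

```shell
# Enumerate NVMe controllers and their namespace block devices via sysfs.
# Each controller at $sysfs/class/nvme/nvmeX lists its namespaces as
# nvmeXnY subdirectories. The sysfs root argument defaults to /sys and
# exists only to make the walk testable against a mock tree.
list_nvme_namespaces() {
    sysfs="${1:-/sys}"
    for ctrl in "$sysfs"/class/nvme/nvme*; do
        [ -d "$ctrl" ] || continue
        echo "controller: ${ctrl##*/}"
        for ns in "$ctrl"/nvme*n*; do
            [ -d "$ns" ] || continue
            echo "  namespace: ${ns##*/}"
        done
    done
}

list_nvme_namespaces /sys
```

On the three-namespace setup shown earlier, this would report nvme0 with nvme0n1, nvme0n2, and nvme0n3.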

[1] https://nvmexpress.org/wp-content/uploads/NVM-Express-1_2a.pdf
“1.6.20 namespace - A quantity of non-volatile memory that may be formatted into logical blocks. When formatted, a namespace of size n is a collection of logical blocks with logical block addresses from 0 to (n-1).”

[2] https://sata-io.org/system/files/member-downloads/NVMe%20and%20AHCI_%20_long_.pdf

"Another important feature of the NVMe interface is its ability to support the partitioning of the physical storage extent into multiple logical storage extents, each of which can be accessed independently of other logical extents. These logical storage extents are called Namespaces. Each NVMe Namespace may have its own pathway, or IO channel, over which the host may access the Namespace…

The ability to partition a physical storage extent into multiple logical storage extents and then to create multiple IO channels to each extent is a feature of NVMe that was architected and designed to allow the system in which it is used to exploit the parallelism available in upper layers of today’s platforms and extend that parallelism all the way down into the storage device itself. Multiple IO channels that can be dedicated to cores, processes or threads eliminate the need for locks, or other semaphore based locking mechanisms around an IO channel. This ensures that IO channel resource contention, a major performance killer in IO subsystems, is not an issue."