Pavilion Data is pleased to see established vendors such as Dell EMC and Pure Storage jumping on the NVMe-based storage bandwagon. There are two categories of storage devices leveraging NVMe:
- Traditional Shared Storage
- Composable/Disaggregated Storage
Traditional Shared Storage is used for scale-up workloads and is usually a replacement for existing SATA- or SAS-based All Flash Arrays (AFAs) using Fibre Channel or iSCSI protocols. To leverage NVMe, these systems are retrofitted in two ways: 1) replacing the media with NVMe drives, and 2) replacing the internal controller-to-drive interconnect from SAS/SATA to NVMe. Usually no other architectural change is made. This provides a performance boost, but as the media becomes faster, traditional AFA designs (typically dual-controller architectures) see the bottleneck shift from the media to the controllers.
Composable/Disaggregated Storage, on the other hand, is entirely different. In a rack-scale architecture, a storage system has to meet the performance needs of all the servers within the rack. As an example, assume 20 servers, each with 2 PCIe flash cards. If a single card can deliver 500K IOPS, the aggregate performance requirement is 20 million IOPS for the rack in order to deliver the equivalent of Direct Attached Storage. And this performance must fit within an inconspicuous form factor in the rack. What good is bottom-of-the-rack storage if it leaves no space for servers within the rack?
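The rack-scale sizing above is simple arithmetic; a minimal sketch, using only the illustrative figures from the example (20 servers, 2 cards each, 500K IOPS per card):

```python
# Back-of-the-envelope sizing for rack-scale disaggregated storage.
# All figures are the illustrative assumptions from the example above,
# not measured numbers from any particular system.
servers_per_rack = 20
flash_cards_per_server = 2
iops_per_card = 500_000  # one PCIe flash card

# Aggregate IOPS the shared storage system must deliver so every server
# sees the equivalent of its Direct Attached Storage performance.
required_iops = servers_per_rack * flash_cards_per_server * iops_per_card
print(f"{required_iops:,} IOPS")  # 20,000,000 IOPS
```

Scaling any one of the three factors (denser racks, more cards, faster cards) raises the target further, which is why a fixed dual-controller ceiling becomes the limit.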
Composable/Disaggregated Storage can be used as a replacement for Traditional Shared Storage, if the connectivity exists, but not the other way around.
Just as we saw a move from scale-up to scale-out applications, driven primarily by the need to eliminate compute and storage bottlenecks in modern web-scale applications, a similar architectural shift to a scale-out controller design is required to allow for faster data processing. This is where Traditional Shared Storage retrofitted with NVMe fails. A faster SAN is not Shared Accelerated Storage. Rather, modern applications and rack-scale architecture demand a radically different storage system design, one that scales out the controllers to take advantage of the profound developments of NVMe.
With the rate of data growth and the need for everything to be real-time, large amounts of data must be stored, retrieved, and processed at ever-increasing performance, driving a need for end-to-end parallelism. This spans applications, protocols, networks, controllers, and devices. As we see Big Storage roll out products touting their NVMe-based offerings, one has to ask a few key questions to figure out how they stack up and whether they are Traditional Shared or Composable/Disaggregated Storage:
Five questions I would ask before picking a Composable/Disaggregated Storage system:
- Is there any serialization in the stack, e.g., are you using a serial protocol?
- Is this a controller scale-out architecture or dual controller only?
- Is the controller to device using a serial or parallel protocol?
- How many hosts can be connected in parallel without a drop in performance? Where is the performance knee?
- What is the IOPS per rack unit (IOPS/RU)?
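The last question reduces to a simple density metric that makes systems directly comparable. A minimal sketch, where the system names and all figures are hypothetical placeholders (not vendor benchmarks):

```python
# IOPS per rack unit (IOPS/RU) as a density metric for comparing
# storage systems. Every figure below is a made-up placeholder used
# only to illustrate the calculation.
systems = {
    "retrofitted dual-controller AFA": {"iops": 1_000_000, "rack_units": 2},
    "scale-out disaggregated array": {"iops": 20_000_000, "rack_units": 4},
}

for name, s in systems.items():
    iops_per_ru = s["iops"] / s["rack_units"]
    print(f"{name}: {iops_per_ru:,.0f} IOPS/RU")
```

Normalizing by rack units captures the form-factor concern raised earlier: a system that hits the IOPS target but consumes the bottom half of the rack scores poorly on this metric.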