Zoned Storage Addresses Storage Scale
Storage for data centers today consists of hard disk drives (HDDs) and flash-based solid state drives (SSDs). These devices evolved around early computer architecture interfaces such as SCSI (Small Computer System Interface), SAS (Serial Attached SCSI) and SATA (Serial Advanced Technology Attachment). HDDs appeared to the host as contiguous sets of blocks, when in fact the media was organized in zones and data was mapped to various physical sectors as it was written. When SSDs were first introduced, they were so much faster than HDDs that the controller inside the device could allow the host to write data with virtually no restrictions; internally, the controller managed erasing, organizing, moving and writing the data.

As data volumes surged, so did the demands on storage. Existing implementations do not scale because of the overhead that HDDs and SSDs are required to handle internally. Enabling storage devices to scale effectively requires a new framework for data centers and cloud providers. One approach is Zoned Storage, an open-source, standards-based initiative introduced by Western Digital to help data centers scale efficiently into the zettabyte storage capacity era. For both HDDs and SSDs, Zoned Storage is built on open standards: ZBC (Zoned Block Commands) and ZAC (Zoned ATA Command Set) for SMR (Shingled Magnetic Recording) HDDs, and Zoned Namespaces (ZNS) for NVMe™ SSDs. At a high level, these standards expose the device's physical organization as a set of zones, and the host software arranges data into sequential writes within those zones before it is stored. By leveraging Zoned Storage, both HDDs and SSDs can support higher densities, increase endurance and lower total cost of ownership (TCO) for data centers and cloud providers. Next, let's focus on the main memory challenge of existing architectures.
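To make the zone model concrete, here is a minimal sketch, assuming a Linux host with zoned block device support and a hypothetical zoned device at /dev/sdb, that reports the first few zones and their write pointers through the kernel's <linux/blkzoned.h> ioctl interface.

/* Minimal sketch: report the first zones of a zoned block device via
 * Linux's zoned block device ioctl interface (<linux/blkzoned.h>).
 * The device path is hypothetical; a host-managed SMR HDD or ZNS SSD
 * exposed by the kernel would be used in practice. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blkzoned.h>

int main(void)
{
    const char *dev = "/dev/sdb";            /* hypothetical zoned device */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned int nr = 16;                    /* ask for the first 16 zones */
    struct blk_zone_report *rep =
        calloc(1, sizeof(*rep) + nr * sizeof(struct blk_zone));
    rep->sector = 0;                         /* start reporting from LBA 0 */
    rep->nr_zones = nr;

    if (ioctl(fd, BLKREPORTZONE, rep) < 0) { perror("BLKREPORTZONE"); return 1; }

    /* Each zone has a fixed start/length and a write pointer; sequential
     * zones must be written in order at the write pointer and reset as a unit. */
    for (unsigned int i = 0; i < rep->nr_zones; i++) {
        struct blk_zone *z = &rep->zones[i];
        printf("zone %2u: start=%llu len=%llu wp=%llu cond=%u\n", i,
               (unsigned long long)z->start, (unsigned long long)z->len,
               (unsigned long long)z->wp, (unsigned)z->cond);
    }

    free(rep);
    close(fd);
    return 0;
}

On a host-managed SMR HDD or a ZNS SSD, writes to a sequential zone must land at the zone's write pointer and the zone is reset as a unit, which is why host software (file systems, databases or libraries such as libzbd) takes over the data placement that drive firmware or a flash translation layer would otherwise handle.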
Next Generation Memory Architecture – OmniXtend™
Because main memory is controlled by the CPU, today's system architecture must conform to the CPU's interfaces. This effectively fixes the ratio of memory to compute in any practical system, which is an impediment to scaling many memory-centric applications. There are various attempts to circumvent this limitation, but they all have drawbacks. For example, Remote Direct Memory Access (RDMA) architectures require software to manage moving bits from non-volatile storage into and out of main memory, plus more software to synchronize the distant copies, that is, to provide coherence to the programmer. The software and network infrastructure needed is burdensome and costly.

Several new technologies are enabling architects to rethink memory-centric computing. The first is the emergence of higher-density, byte-addressable non-volatile memories, which are quickly becoming cost-competitive with dynamic random access memory (DRAM) and can serve as a new type of main memory. The second is the growth of the P4 programming language and its use in dataplane-programmable Ethernet switches; this flexibility allows architects to run completely new protocols over low-cost Ethernet hardware. The third is the acceptance and openness of RISC-V, an open instruction set that has spawned numerous processor microarchitectures. Many of these implementations are open source, including the buses and messaging required for multiple CPUs to share cache and main memory.

The cache coherency bus ensures that all hosts see a synchronized view of the main memory they share. Enabling a cache-coherent, memory-centric architecture therefore requires sharing the cache coherency bus among all existing and future devices that access main memory. In the existing proprietary ecosystems, such as x86 and Arm®, the cache coherency bus is closed. With RISC-V, however, there are open implementations of on-chip cache coherency buses. With the bus specification open and unencumbered, main memory can be unleashed and shared among heterogeneous system components. See an example design in Figure 1.
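To make the role of a shared coherency fabric concrete, the sketch below models a simplified directory-based protocol in C: a home node for main memory tracks which hosts hold each cache line and serializes requests so that no host can observe a stale value. This is purely illustrative; it does not reproduce the TileLink or OmniXtend message formats, and the host count, state names and messages are assumptions made for the example.

/* Illustrative sketch of the idea behind a shared cache coherency fabric:
 * a "home" node for main memory tracks which hosts hold each cache line and
 * serializes access, so every host observes a synchronized view of memory.
 * Simplified directory protocol for clarity only; not the OmniXtend wire format. */
#include <stdint.h>
#include <stdio.h>

#define MAX_HOSTS 4

enum line_state { INVALID, SHARED, EXCLUSIVE };

struct dir_entry {                /* directory state for one cache line */
    enum line_state state;
    uint8_t sharers;              /* bitmap of hosts holding a copy */
};

/* A host asks the home node for read (shared) or write (exclusive) access. */
static void acquire(struct dir_entry *e, int host, int want_exclusive)
{
    if (want_exclusive) {
        /* Invalidate every other copy before granting exclusive ownership,
         * so no host can read a stale value afterwards. */
        for (int h = 0; h < MAX_HOSTS; h++)
            if (h != host && (e->sharers & (1u << h)))
                printf("  home -> host %d: invalidate\n", h);
        e->sharers = (uint8_t)(1u << host);
        e->state = EXCLUSIVE;
        printf("  home -> host %d: grant exclusive\n", host);
    } else {
        /* A current exclusive owner must write back before others may read. */
        if (e->state == EXCLUSIVE)
            printf("  home -> owner: downgrade and write back\n");
        e->sharers |= (uint8_t)(1u << host);
        e->state = SHARED;
        printf("  home -> host %d: grant shared\n", host);
    }
}

int main(void)
{
    struct dir_entry line = { INVALID, 0 };

    printf("host 0 reads the line:\n");
    acquire(&line, 0, 0);
    printf("host 1 writes the line:\n");
    acquire(&line, 1, 1);
    printf("host 2 reads the line:\n");
    acquire(&line, 2, 0);
    return 0;
}

In an Ethernet-based coherence fabric of the kind this section describes, request and grant messages like these travel as packets through dataplane-programmable switches rather than over an on-chip bus, which is what allows heterogeneous devices to share the same main memory.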


