Compute Express Link (CXL) is dramatically changing the way memory is used in computing systems. Tutorials at the IEEE Hot Chips Conference and the recent SNIA Storage Developer Conference explored how CXL works and how it will change the way we do computing. Additionally, recent announcements by Colorado startup IntelliProp of its Omega Memory Fabric chips pave the way for the implementation of CXL to enable memory pooling and composable infrastructure.
The initial applications for CXL were memory expansion for individual CPUs, but CXL will have its greatest impact on the sharing of many different types of memory technologies (DRAM and persistent memory) between CPUs. The image below (from the CXL Hot Chips tutorial) shows the different ways memory can be shared with CXL.
Uses for CXL-connected memory
As Yang Seok Ki, VP at Samsung Electronics, said at SNIA SDC, CXL is an industry-supported cache-coherent interconnect for processors, memory expansion and accelerators. CXL versions 1.0 and 2.0 were released earlier (working with PCIe 5.0), and in early August at the Flash Memory Summit, CXL version 3.0 was released, which works with the faster PCIe 6.0 connection. CXL 3.0 also enables multi-level switching and memory fabrics, as well as direct peer-to-peer memory access.
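For a rough sense of scale, the back-of-the-envelope sketch below compares the raw per-direction bandwidth of the two underlying PCIe generations for a common x16 link. It uses only the published per-lane signaling rates and deliberately ignores encoding, FLIT and protocol overheads, so real CXL throughput will be somewhat lower.

```python
# Rough, illustrative comparison of raw PCIe link bandwidth per direction.
# Only published per-lane signaling rates are used; encoding and protocol
# overheads are ignored, so these are upper bounds, not CXL measurements.

GT_PER_LANE = {
    "PCIe 5.0 (CXL 1.x/2.0)": 32,   # gigatransfers/s per lane
    "PCIe 6.0 (CXL 3.0)": 64,       # PAM4 signaling doubles the rate
}

LANES = 16  # a common x16 link width

for gen, gts in GT_PER_LANE.items():
    # Each transfer carries 1 bit per lane; divide by 8 to get bytes.
    gb_per_s = gts * LANES / 8
    print(f"{gen}: ~{gb_per_s:.0f} GB/s per direction (x{LANES}, raw)")
```

The point of the arithmetic is simply that the move to PCIe 6.0 signaling roughly doubles the raw bandwidth available to a CXL 3.0 link.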
The presentation also outlined how CXL 2.0 makes memory available to a CPU over a local CXL connection, while a CXL 3.0 switched network makes remote memory available, as shown below.
CXL enables near, mid and far memory
Near memory is directly connected to the CPU. Some of the first available CXL products are mid-tier memory expansion products that provide additional memory to a CPU over a local CXL link. CXL thus opens the door to memory tiering, with performance and cost tradeoffs similar to those of storage tiering.
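To illustrate that tradeoff, the sketch below models the blended access latency of a system that serves most accesses from near memory and spills the rest to CXL-attached tiers. The latency figures and hit fractions are illustrative assumptions, not measurements of any particular product.

```python
# Toy model of average memory access latency across near/mid/far tiers.
# Latencies (ns) and access shares are assumptions chosen only to show
# the shape of the tradeoff; they are not vendor or benchmark numbers.

tiers = [
    # (name, assumed latency in ns, fraction of accesses served)
    ("near: direct-attached DRAM",        100, 0.80),
    ("mid: CXL-attached memory expander", 250, 0.15),
    ("far: CXL-switched / pooled memory", 400, 0.05),
]

average_ns = sum(latency * share for _, latency, share in tiers)

for name, latency, share in tiers:
    print(f"{name}: {latency} ns, {share:.0%} of accesses")
print(f"Blended average latency: {average_ns:.0f} ns")
```

As with storage tiering, the win comes from keeping hot data in the fastest tier while larger, cheaper tiers absorb the capacity.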
IntelliProp just announced its Omega Memory Fabric chips. The chips incorporate the CXL standard along with the company’s fabric management software and network attached memory (NAM) system. IntelliProp also announced three FPGA (field-programmable gate array) products that incorporate its Omega Memory Fabric. The company says its memory-agnostic innovation will help the adoption of composable memory, leading to significant improvements in data center energy consumption and efficiency. The company says its Omega Memory Fabric has the following characteristics:
- Dynamic multipathing and memory allocation
- End-to-end (E2E) security with AES-XTS 256 and added integrity
- Supports non-tree topologies for peer-to-peer
- Management scale for large deployments with multi-fabrics/subnets and distributed managers
- Direct memory access (DMA) enables efficient data movement between memory tiers without tying up CPU cores
- Memory independent and up to 10x faster than RDMA
The three FPGA solutions, an adapter, a switch and a fabric manager, connect CXL devices to CXL hosts; IntelliProp says ASIC versions will be available in 2023. The company says the solutions enable data centers to increase performance, scale across tens to thousands of host nodes, use less power because data is transmitted with fewer hops, and mix shared DRAM (faster memory) with shared SCM (slower memory).
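The composability idea behind such a memory fabric can be sketched with a toy allocator: a fabric manager tracks shared pools of DRAM and SCM capacity and leases slices to host nodes on demand. This is a conceptual illustration only; the class and method names are hypothetical and do not reflect IntelliProp's actual fabric management software or APIs.

```python
# Conceptual sketch of fabric-managed memory pooling: a manager leases
# capacity from shared DRAM and SCM pools to hosts and can reclaim it.
# All names here are hypothetical, for illustration only.

class MemoryPool:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.leases = {}  # host -> GB leased

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.leases.values())

    def allocate(self, host: str, size_gb: int) -> bool:
        if self.free_gb() < size_gb:
            return False  # pool exhausted; caller may try another tier
        self.leases[host] = self.leases.get(host, 0) + size_gb
        return True

    def release(self, host: str) -> None:
        self.leases.pop(host, None)


# A fabric manager might expose a fast (DRAM) and a capacity (SCM) tier.
dram = MemoryPool("shared DRAM", 1024)
scm = MemoryPool("shared SCM", 4096)

# Hosts request memory; fall back to SCM when the DRAM pool is exhausted.
for host, need_gb in [("host-01", 512), ("host-02", 768), ("host-03", 256)]:
    pool = dram if dram.allocate(host, need_gb) else scm
    if pool is scm:
        scm.allocate(host, need_gb)
    print(f"{host}: leased {need_gb} GB from {pool.name}")

print(f"DRAM free: {dram.free_gb()} GB, SCM free: {scm.free_gb()} GB")
```

The sketch only captures the bookkeeping idea: capacity lives in shared pools rather than being captive to any one server, which is what allows memory to be composed and reclaimed as workloads change.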
CXL is poised to change the way memory is used in computing architectures, according to a 2022 Hot Chips tutorial and presentations at SNIA SDC. IntelliProp showcased its Omega Memory Fabric technology and three FPGA solutions to enable CXL-ready memory fabrics.