Gerry Fan, CEO of XConn, co-authored this blog post.
In a world where AI chatbots answer complex queries in seconds and high-definition video streams to our smartphones, data centers provide the essential backbone. With accelerating demands for bandwidth, data centers are undergoing a transformation, trending toward disaggregated architectures and workloads running on accelerators. Making these approaches possible is Compute Express Link® (CXL®), the CPU-to-device, cache-coherent interconnect for processors, memory expansion, and AI accelerators.
XConn Technologies is well versed in the critical role that CXL plays in enabling next-generation AI and high-performance computing (HPC) applications. Founded in 2020 in San Jose, California, XConn's mission is to accelerate AI computing in data centers and HPC via its high-performance, power-efficient, scalable, and cost-effective interconnect solutions.
Recently, XConn achieved first-pass silicon success for its XC50256 CXL 2.0 data center switch SoC using Synopsys CXL 2.0 Controller and PCI Express 5.0 PHY IP on a FinFET process. By implementing CXL, which builds on PCIe and retains support for it, in a complex switch, the company is looking to provide a more seamless avenue to CXL adoption. Read on to learn how XConn developed its industry-first CXL switch SoC for data center and memory pooling applications, delivering the highest throughput, lowest latency, and lowest power.
While PCs and laptops once handled heavy computing workloads, that work has shifted to data centers, the workhorses of our increasingly AI-driven digital world. Globally, data creation is anticipated to reach 180 zettabytes by 2025, according to industry estimates. For the hyperscale data centers managing all this information, high bandwidth and low latency are the cornerstones that keep the digital world turning.
Data center architectures are changing in response to these growing demands. In the hyperscale world, the trend is toward disaggregation, where homogeneous resources such as compute, storage, and networking sit in separate boxes. The boxes are connected via optical interconnects, and a central intelligence unit determines and pulls just what each workload needs from each box, freeing the remaining resources for other workloads.
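To make the composition step concrete, here is a minimal Python sketch of that "central intelligence unit" pulling resources from disaggregated boxes. All names and capacities are hypothetical; a real orchestrator would coordinate with fabric managers over the optical interconnect rather than local dictionaries.

```python
# Illustrative model of composing a workload from disaggregated resource boxes.
# Box names, resource names, and capacities are all hypothetical.

# Free capacity remaining in each disaggregated resource box.
boxes = {
    "compute": {"cpu_cores": 512},
    "memory":  {"dram_gb": 8192},
    "storage": {"nvme_tb": 200},
}

def compose(workload: dict) -> dict:
    """Pull just the requested capacity from each box, leaving the rest free."""
    grant = {}
    for box, needs in workload.items():
        for resource, amount in needs.items():
            available = boxes[box][resource]
            if amount > available:
                raise RuntimeError(f"{box} box has only {available} {resource}")
            boxes[box][resource] -= amount  # remaining capacity stays available
            grant[resource] = amount
    return grant

# A workload takes only what it needs; other workloads can use the remainder.
grant = compose({"compute": {"cpu_cores": 64}, "memory": {"dram_gb": 1024}})
print(grant)            # {'cpu_cores': 64, 'dram_gb': 1024}
print(boxes["memory"])  # {'dram_gb': 7168} still free for other workloads
```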
Disaggregation enables memory pooling, which has become increasingly important in data centers. In a traditional data center architecture, each server has its own set of memory, and any application running on that server can use only the memory attached to it, capping how much memory any particular application can consume. Today's data-driven applications, such as large language models (LLMs) like ChatGPT, are extremely hungry for memory; no matter how much memory a given server is allotted, applications like LLMs will find a way to demand more. The best way out of this dilemma is to remove the memory wall and allow memory resources to be shared and pooled among multiple servers. When any program running on any server can draw from the memory pool according to the application's needs, overall memory usage becomes more efficient. And because pooled memory can reach hundreds of terabytes of capacity, even very large requests can be accommodated.
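The difference between the two models can be sketched in a few lines of Python. The class, capacities, and application name below are invented for illustration; the point is simply that a request exceeding one server's local DRAM still fits in the pool, and released capacity becomes available to other servers.

```python
# Contrast between fixed per-server memory and a shared pool (numbers hypothetical).

PER_SERVER_GB = 512      # local DRAM ceiling in the traditional model
POOL_GB = 200_000        # pooled capacity, hundreds of terabytes

class MemoryPool:
    """Toy model of a memory pool shared by many servers."""
    def __init__(self, capacity_gb: int):
        self.free_gb = capacity_gb
        self.leases = {}  # app name -> granted GB

    def allocate(self, app: str, gb: int) -> bool:
        if gb > self.free_gb:
            return False
        self.free_gb -= gb
        self.leases[app] = gb
        return True

    def release(self, app: str) -> None:
        self.free_gb += self.leases.pop(app)  # capacity returns to the pool

pool = MemoryPool(POOL_GB)
llm_need_gb = 4_096      # far beyond one server's local memory

print(llm_need_gb <= PER_SERVER_GB)               # False: hits the memory wall
print(pool.allocate("llm-serving", llm_need_gb))  # True: the pool absorbs it
pool.release("llm-serving")  # memory becomes reusable by other servers
```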
In data center architectures that use accelerators, CXL provides the conduit for direct memory access, letting accelerators reach the same data as the processor and avoiding the need to replicate data across the system. CXL handles memory allocation by assigning the appropriate amount of memory to applications that need it and releasing that memory back to the pool once the application has finished with it. This lowers latency and reduces software overhead, freeing the system to deliver better die-to-die communication in the multi-die designs that are becoming increasingly popular in data centers.
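The "no replication" point can be illustrated with a small sketch. Here Python's memoryview stands in for a cache-coherent mapping of one pooled allocation; none of this is real device code, but it shows the idea that host and accelerator hold views of the same bytes rather than separate copies.

```python
# Zero-copy sharing sketch: the processor and an accelerator hold views of the
# same buffer rather than each keeping its own replica. memoryview is only an
# analogy for a coherent mapping of a pooled region.

shared = bytearray(16)          # one allocation drawn from the memory pool

cpu_view = memoryview(shared)   # "host" mapping of the region
accel_view = memoryview(shared) # "accelerator" mapping of the same region

cpu_view[0] = 42                # the host writes...
print(accel_view[0])            # 42 -- the accelerator sees it, no copy made

del cpu_view, accel_view        # views dropped; in the analogy, the
shared = None                   # allocation is released back to the pool
```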
It's no wonder the CXL standard, with its cache coherency and extremely low latency, is quickly gaining traction among data center SoC designers. Its computational offloading capability and interoperability with PCI Express open up a broad range of design possibilities. By enabling disaggregation of memory and other peripheral components, CXL is integral to the emerging composable memory architecture. The standard's power efficiency is also critical in mitigating the energy demands of today's data centers, which continue to climb.
XConn's CXL 2.0 switch SoC delivers twice the bandwidth of its closest competitor's solution, with real-time or near-real-time responses. As the company designed the switch SoC, it knew that reliably enabling disparate devices from different vendors to talk to each other would be key. For data centers, reliability, availability, and serviceability (RAS) are critical attributes that apply to every component in the system.
Silicon-proven IP is commonly used to minimize interoperability issues. For its IP, XConn turned to Synopsys because our CXL controller and PHY are widely used in the industry and proven across many customer designs. In addition to the CXL 2.0 controller, with its RAS features, and the PCIe 5.0 PHY IP, the company also used Synopsys Verification IP for CXL. Working closely with various Synopsys engineering teams, the XConn team felt well supported in its push to bring the CXL switch to market quickly.
XConn is currently working on its next-generation switch design based on CXL 3.0 and PCIe 6.0, which provide double the bandwidth at reduced latency. The company plans to continue working with Synopsys, having selected Synopsys CXL 3.0 and PCIe 6.0 IP for its new design.
With data volumes and demands on data centers both growing at a rapid pace, the CXL standard will continue playing a critical role as data centers evolve. XConn¡¯s CXL switches are key elements in the CXL ecosystem, shaping the future of HPC and AI applications.