
Rambus introduces PCIe® 7.0 Switch IP with Time Division Multiplexing

Rambus PCIe® 7.0 Switch IP with Time Division Multiplexing enables efficient, scalable PCIe fabrics that optimize link utilization and reduce system complexity for scale up and scale out of distributed AI clusters and high-performance computing networks.

Rambus has introduced the Rambus PCIe® 7.0 Switch IP with Time Division Multiplexing (TDM), a new addition to its advanced interconnect IP portfolio designed to address the rapidly escalating bandwidth, latency, and scalability requirements of AI, cloud, and high-performance computing (HPC) systems. 

As AI infrastructure grows in scale and architectural complexity, system designers are increasingly challenged to move massive volumes of data efficiently across CPUs, GPUs, accelerators, and NVMe storage. The Rambus PCIe 7.0 Switch IP with TDM is architected to help meet these demands by enabling more flexible and efficient utilization of PCIe links, supporting emerging disaggregated and pooled compute architectures while maintaining low latency and deterministic performance. 

Rambus PCIe 7.0 Switch IP with TDM Optimized for Next-Generation AI and Data Center SoCs 

Built on the PCIe 7.0 specification, Rambus' newest switch IP is optimized for next‑generation AI and data center SoCs that require extreme bandwidth density, advanced traffic management, and seamless scalability. By incorporating TDM capabilities, the switch enables designers to intelligently schedule and multiplex traffic across shared links, helping maximize fabric utilization while supporting diverse workload profiles, from large‑scale AI training to latency‑sensitive inference and data movement. 
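The general idea behind time division multiplexing can be illustrated with a minimal sketch. This is not the Rambus implementation; the fixed round‑robin slot policy, the channel names, and the function below are assumptions chosen only to show how TDM grants each traffic class a deterministic share of a shared link:

```python
from collections import deque

def tdm_schedule(queues, num_slots):
    """Illustrative round-robin TDM scheduler (hypothetical, for explanation only).

    Each time slot is granted to the next channel in a fixed rotation,
    so every channel gets a deterministic share of the shared link.
    An empty channel's slot simply goes unused rather than being stolen,
    which keeps latency predictable at the cost of some idle capacity.
    `queues` maps channel name -> deque of pending packets.
    Returns the transmitted sequence as (slot, channel, packet) tuples.
    """
    order = list(queues)          # fixed rotation order of channels
    link = []                     # what actually went out on the link
    for slot in range(num_slots):
        ch = order[slot % len(order)]
        if queues[ch]:
            link.append((slot, ch, queues[ch].popleft()))
    return link

# Example: three traffic classes sharing one link for six time slots
q = {
    "training":  deque(["T0", "T1", "T2"]),
    "inference": deque(["I0"]),
    "storage":   deque(["S0", "S1"]),
}
timeline = tdm_schedule(q, num_slots=6)
```

In this toy model, slot 4 goes unused because the inference queue has drained, showing the determinism/utilization trade‑off that more sophisticated TDM scheduling in a real switch fabric would aim to manage.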

“The acceleration of AI is fundamentally reshaping system architectures, and it’s no longer sufficient to simply add more lanes or more endpoints,” said Simon Blake‑Wilson, senior vice president and general manager of Silicon IP at Rambus. “With our PCIe 7.0 Switch IP with TDM, Rambus is giving system architects a new degree of freedom to scale bandwidth efficiently and deterministically, while reducing complexity and improving overall system utilization. This is a critical enabler for scale up and scale out of the next wave of advanced AI clusters and HPC networks.” 

“AI infrastructure is increasingly defined by how efficiently data can move between heterogeneous compute and memory resources,” said Jeff Janukowicz, VP, Semiconductors and Enabling Technologies. “Advanced PCIe switching technologies that improve link utilization and enable flexible traffic orchestration will be key to building scalable, cost‑effective AI platforms as next‑generation interconnect technology evolves.” 
