KEYSIGHT SETS NEW BENCHMARK FOR SILICON VALIDATION OF AI ETHERNET SWITCHING AT 51.2T
The milestone testbed comprised Keysight's AresONE-M 800GE and the Marvell Teralynx 10 Ethernet programmable switch device.
Keysight Technologies has achieved a new milestone in validating the accelerated infrastructure necessary to power the next-generation AI-driven cloud data centre, using its AresONE-M 800GE test platform and the Marvell® Teralynx® 10 Ethernet switch chip.
AI networks require high performance, unprecedented scale, and a low-latency switching fabric to ensure seamless handling of the massive data volumes and real-time decision-making that advanced AI workloads demand. Keysight collaborated with Marvell Technology to address the unique testing needs of network equipment manufacturers, silicon chipset vendors, and data centre operators as they equip, build, and manage high-speed 800GE and 1.6T networks. Implementing an effective validation strategy helps mitigate the risk of costly product redesigns, recalls, or damage to customer loyalty by assuring that solutions will perform as designed and promised in real-world environments.
The milestone testbed comprised Keysight's AresONE-M 800GE, the industry's most comprehensive multi-speed Ethernet performance test platform and traffic generator, and the Marvell Teralynx 10 Ethernet programmable switch device, which offers industry-leading low latency, supporting up to 51.2 Tbps throughput with comprehensive analytics.
Highlights from the validation include:
- High throughput and high speed – The Keysight AresONE-M 800GE generated 51.2 Tbps of traffic, sending it successfully through the Teralynx 10 switch to test its limits. The test validated an 800GE interface speed based on 112G SerDes, which facilitates faster data transfer between devices and extended reach in data centre interconnects or telecommunications networks that run data-intensive AI applications.
- Unprecedented scalability – Eight AresONE-M 8-port chassis were chained together, providing an industry-first 64 x 800GE link configuration to achieve a high-scale test bed running at 800GE line rate.
- Low latency – Low latency is critical for achieving the shortest job completion time for AI training and other highly distributed applications. These AI workloads depend on the switching fabric to provide the lowest possible latency, and to do so predictably.
- Performance analysis – Beyond the need for high bandwidth and low latency, measuring the performance of all 64 x 800GE links was an important aspect of the testbed, providing deeper, actionable analytics around loss, latency, and jitter at line rate. These valuable insights will help accelerate innovation for data centres deploying future AI applications.
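The scale figures in the highlights above fit together arithmetically: eight 8-port chassis yield 64 links, each at 800GE, for 51.2 Tbps aggregate, and each 800GE port is driven by eight electrical lanes in the 112G SerDes class. A minimal sketch of that arithmetic (the constants are taken from the figures quoted above; the lane count per port is a standard assumption for 112G-based 800GE, not stated in the release):

```python
# Illustrative arithmetic for the testbed scale described in the highlights.
PORT_SPEED_GBPS = 800      # 800GE line rate per port
PORTS_PER_CHASSIS = 8      # AresONE-M 8-port chassis
CHASSIS_COUNT = 8          # eight chassis chained together
SERDES_LANES_PER_PORT = 8  # assumed: 800GE built from 8 lanes of 112G-class SerDes

total_ports = PORTS_PER_CHASSIS * CHASSIS_COUNT            # 64 links
aggregate_tbps = total_ports * PORT_SPEED_GBPS / 1000      # 51.2 Tbps
lane_rate_gbps = PORT_SPEED_GBPS / SERDES_LANES_PER_PORT   # 100 Gbps payload per lane

print(f"{total_ports} x {PORT_SPEED_GBPS}GE = {aggregate_tbps} Tbps aggregate")
print(f"~{lane_rate_gbps} Gbps of payload per SerDes lane (112G class with overhead)")
```

The 112G figure refers to the electrical lane's signalling class; after encoding and forward-error-correction overhead, each lane carries roughly 100 Gbps of payload, which is why eight lanes make one 800GE port.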