 
Based on the 5th generation Lenovo Neptune direct water-cooling platform, this server node provides a total of 24x DDR5 DIMM slots, delivering up to 3TB of RAM when all slots are populated with 128GB TruDDR5 RDIMMs running at up to 4800 MT/s. Under the hood, the system can be outfitted with either 2x 2.5” 7mm NVMe SSDs or 1x 2.5” 15mm NVMe SSD. The system also supports a single liquid-cooled M.2 NVMe SSD for booting the OS or for additional storage. This high-density server carries 4x NVIDIA H100 Tensor Core GPUs interconnected through NVLink, delivering performance improvements for HPC, AI training, and inference workloads. Users can apply this technology to every major deep learning framework as well as HPC applications, including chemistry codes such as Gaussian and GROMACS, finite element solvers such as LS-DYNA and Simulia Abaqus, molecular dynamics packages such as NAMD and AMBER, and more.
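
As a quick sanity check on the memory figure above, here is a minimal Python sketch using only the slot count and DIMM capacity quoted in this section (the variable names are illustrative, not part of any Lenovo tooling):

    # Maximum memory configuration described above:
    # 24 DDR5 slots, each holding a 128GB TruDDR5 RDIMM.
    dimm_slots = 24
    dimm_capacity_gb = 128

    total_gb = dimm_slots * dimm_capacity_gb   # 3072 GB
    total_tb = total_gb / 1024                 # 3.0 TB
    print(f"Maximum memory: {total_gb} GB ({total_tb:.1f} TB)")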
 
The node features onboard Ethernet connectivity in the form of 2x 25GbE SFP28 LOM ports, each capable of 1Gb, 10Gb, or 25Gb operation, plus 1x 1GbE RJ45 port that supports NC-SI. This machine does not support PCIe slots, but it does support high-speed GPU-direct networking with dual NDR InfiniBand connections providing 800Gb/s. With direct water cooling, the ThinkSystem SD665-N V3 server tray and DW612S enclosure offer the highest levels of performance and efficiency in data center cooling. The embedded networking chips enable direct GPU-to-GPU communication without going through the CPU or PCIe switches. This allows for maximum scaling, from a single chassis and tray in a single rack up to a full sustained-Exaflop system with fewer than 200 racks and fewer than 6,000 nodes.
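
The headline networking and scale-out numbers above can be reproduced with the same kind of back-of-the-envelope sketch (assuming each NDR InfiniBand link runs at 400Gb/s, which is what makes two links add up to the quoted 800Gb/s):

    # Aggregate GPU-direct fabric bandwidth per tray.
    ndr_links = 2
    gbps_per_link = 400          # InfiniBand NDR link rate
    print(f"GPU-direct bandwidth: {ndr_links * gbps_per_link} Gb/s")

    # The quoted sustained-Exaflop scale-out: fewer than 6,000 nodes
    # in fewer than 200 racks implies roughly 30 nodes per rack.
    max_nodes, max_racks = 6000, 200
    print(f"Implied density: ~{max_nodes // max_racks} nodes per rack")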
 
The DW612S enclosure supports N+1 power redundancy for maximum uptime. An XClarity Controller (XCC) in each server node enables local and remote management of the node and can be upgraded to XCC Platinum. A System Management Module (SMM) at the enclosure level provides a single connection that can be daisy-chained across up to 7x enclosures and 84 server nodes, reducing switch requirements.
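
For illustration, the management fan-out implied by those figures works out as follows (a minimal sketch based only on the 7-enclosure, 84-node numbers above):

    # One SMM daisy chain spans up to 7 enclosures and 84 nodes,
    # which works out to 12 server nodes per DW612S enclosure.
    enclosures_per_chain = 7
    nodes_per_chain = 84
    print(f"Nodes per enclosure: {nodes_per_chain // enclosures_per_chain}")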