Application Areas
High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
Technical Details
Supermicro GPU Server ARS-111GL-DNHR-LCC
2 Compute Nodes
1U Rackmount (CSE-GP102TS-R000NDFP)
Power Supply
2x 2700W Redundant Titanium Level (96%) power supplies
Mainboard (per Node)
Super G1SMH (IPMI 2.0, 1GbE BaseT, optional up to 100GbE)
Processors (per Node)
72-core NVIDIA Grace™ CPU on the NVIDIA GH200 Grace Hopper™ Superchip
Supports CPU TDP up to 2000W (Liquid-cooled)
Memory (per Node)
Up to 480GB ECC LPDDR5X onboard memory
Up to 96GB ECC HBM3 GPU memory (see the query sketch below)
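The figures above describe two distinct pools per node: LPDDR5X attached to the Grace CPU (reported by the operating system) and HBM3 attached to the Hopper GPU (reported by the CUDA runtime). A minimal CUDA sketch for checking the HBM capacity a node actually exposes; the file name and output wording are illustrative assumptions, not part of the Supermicro specification:

```
// hbm_query_sketch.cu -- illustrative only; file name and formatting are assumptions.
// Build: nvcc hbm_query_sketch.cu -o hbm_query   (e.g. with -arch=sm_90 for Hopper)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);          // properties of GPU 0 on this node

    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);          // GPU memory seen by the CUDA runtime

    printf("GPU: %s\n", prop.name);
    printf("GPU memory total: %.1f GiB, free: %.1f GiB\n",
           total_b / 1073741824.0, free_b / 1073741824.0);
    return 0;
}
```

The CPU-side LPDDR5X does not appear here; it shows up as ordinary system memory (e.g. in /proc/meminfo).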
GPU (per Node)
NVIDIA GH200 Grace Hopper™ Superchip (Liquid-cooled)
NVLink®-C2C CPU-GPU Interconnect (coherent; see the sketch below)
PCIe GPU-GPU Interconnect
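NVLink-C2C is a cache-coherent link between the Grace CPU and the Hopper GPU, so both sides can operate on a shared allocation without explicit staging copies. A minimal sketch of that pattern using CUDA managed memory; the kernel, sizes, and file name are illustrative assumptions, not taken from the specification:

```
// c2c_shared_memory_sketch.cu -- illustrative only; sizes and names are assumptions.
// Build: nvcc c2c_shared_memory_sketch.cu -o c2c_demo   (e.g. -arch=sm_90 for Hopper)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;               // GPU updates the shared allocation
}

int main() {
    const size_t n = 1 << 20;                   // 1M floats, arbitrary demo size
    float *data = nullptr;

    // One allocation visible to both the Grace CPU and the Hopper GPU.
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // written by the CPU

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // scaled by the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);             // read back by the CPU
    cudaFree(data);
    return 0;
}
```

The same code runs on conventional PCIe-attached GPUs; on a GH200 node the CPU-GPU traffic behind these accesses can travel over NVLink-C2C instead.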
Number of Disk Bays (per Node)
4x E1.S hot-swap NVMe drive bays
Expansion Slots (per Node)
2 x PCIe 5.0 x16 FHFL
2 x M.2 NVMe Gen.5 22110
Hardware Services for 2 to 5 Years
EXPRESS (Monday – Friday, next business day replacement of components)
BUSINESS (Monday – Friday, on-site technician, next business day)
EXCLUSIVE (7×24, 4-hour response time)