Vertiv has introduced new configurations of its Vertiv MegaMod HDX, a prefabricated power and cooling infrastructure for high-density computing environments, including artificial intelligence (AI) and high-performance computing (HPC) deployments. The configurations are intended to give operators the flexibility to support rising power and cooling demands while optimising space utilisation and deployment speed.
The MegaMod HDX combines direct-to-chip liquid cooling with air-cooled architectures to address the thermal requirements of AI workloads, supporting pod-style AI environments and advanced GPU clusters. The compact model has a standard module height and accommodates up to 13 racks at power capacities of up to 1.25 MW, while the extended-height version houses up to 144 racks and supports up to 10 MW. Both configurations support rack densities from 50 kW to more than 100 kW per rack.
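As a rough illustration of how these figures relate, the sketch below computes the average power per rack implied by each configuration's maximum capacity and rack count. It is a back-of-envelope check only, assuming power is spread evenly across fully populated racks; it is not a Vertiv sizing tool, and actual per-rack densities vary with workload and configuration.

```python
# Back-of-envelope check of the average rack density implied by the published figures.
# Assumes power is evenly distributed across fully populated racks (an illustrative
# assumption); Vertiv quotes supported densities of 50 kW to more than 100 kW per rack.

configurations = {
    "standard-height": {"max_racks": 13, "max_power_mw": 1.25},
    "extended-height": {"max_racks": 144, "max_power_mw": 10.0},
}

for name, cfg in configurations.items():
    avg_kw_per_rack = cfg["max_power_mw"] * 1000 / cfg["max_racks"]
    print(f"{name}: ~{avg_kw_per_rack:.0f} kW per rack on average "
          f"({cfg['max_power_mw']} MW across {cfg['max_racks']} racks)")
```

Running this gives roughly 96 kW per rack for the standard-height configuration and roughly 69 kW per rack for the extended-height configuration, both within the quoted density range.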
The MegaMod HDX solutions feature a hybrid cooling architecture that merges direct-to-chip liquid cooling with adaptable air systems in an integrated, prefabricated pod. A distributed redundant power architecture maintains continuous operation even if a module is taken offline, and a buffer-tank thermal backup system keeps GPU clusters thermally stable during maintenance or load transitions.
Combined with Vertiv's portfolio of power, thermal, and IT management solutions, such as the Vertiv Liebert APM2 UPS and Vertiv CoolChip CDU, the MegaMod HDX configurations are designed to improve reliability and scalability for high-performance infrastructures facing escalating density demands.
Vertiv's suite of IT rack infrastructure solutions is designed to support diverse IT system needs, complementing the MegaMod HDX with products such as Vertiv racks and CoolChip solutions.
Factory-integrated designs allow deployments to be delivered precisely and cost-effectively, with comprehensive end-to-end support through Vertiv’s global service network. This model helps customers scale AI infrastructure confidently and efficiently.