Some of the world’s largest datacentre operators are using their influence to propose radical changes in server design, according to a report from 451 Research.
The Open Compute Project (OCP), a user-led organisation whose members include Facebook, Goldman Sachs and NTT Data, wants to see the core components of system design, including the processor, motherboard and networking interconnects, “disaggregated” so they can be upgraded independently. The approach stands in marked contrast to the current industry trend of converged systems, which combine servers, storage and networking into a single pre-integrated product.
Convergence has gained some traction with customers in recent years due to the relative ease and speed of deployment that pre-integrated systems enable. But there are trade-offs in terms of cost and vendor lock-in. For the largest datacentres, buying systems at a more granular component layer promises more flexibility, higher density and significant cost reductions.
“Current monolithic designs can’t easily be customised to fit specific workload requirements or to maximise efficiency,” said John Abbott, Distinguished Analyst at 451 Research. “And customers can’t, for instance, take advantage of the latest high-performance CPU without having to upgrade surrounding technologies that are still operating well.”
Just how much of an opportunity or threat such developments pose to the traditional systems and storage vendors will be discussed in depth during a session at The 451 Group’s forthcoming Hosting and Cloud Transformation Summit in London on April 10th: Converged IT infrastructure – Adoption and Impact.
451 Research’s report follows OCP’s recent unveiling of two key projects intended to kick-start disaggregation: low-latency interconnects using silicon photonics to link components at both the motherboard and the rack level; and a new common slot architecture that should enable fully vendor-neutral motherboards to remain in use across multiple processor generations. Chip giant Intel has contributed its silicon photonics technology, and the Taiwanese systems maker Quanta has built a prototype to prove the concept.
“It’s a radical step, and a more granular level of standardisation than the big system vendors have ever quite managed – or perhaps wanted – to implement on their own,” said Abbott. “And it’s already opening the door to a new set of system suppliers more accustomed to building systems to order and within a tight budget: the original design manufacturers (ODMs).”
Mega datacentres could benefit from deploying their CPUs, I/O, memory and storage in separate racks, allowing each to be upgraded independently, eliminating performance bottlenecks and improving operational measures such as reliability, utilisation, footprint and energy efficiency. Over time, smaller customers could see similar benefits. But there is plenty of work to be done: formal standards must replace the current open hardware specifications, and these must then be married with modular, interoperable and stable open software stacks that tie the disaggregated components back together through systems management products.