When planning capacity and implementing change in a data centre, there is no shortage of information about how the various equipment types are operating. Thanks to the self-monitoring functionality built into everything from a UPS to a server, disk array or air-conditioning unit, one can find out in an instant the status of almost any piece of equipment.
How much disk space is occupied in this array? How heavily utilised is this blade server? What are the temperature and humidity readings in this server cabinet? How much power is needed in this room? The answers to these questions are often readily available from the equipment itself.
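To illustrate the kind of collation involved, the sketch below uses plain Python rather than any vendor's API. It assumes the readings have already been gathered from the equipment (for instance via SNMP or a management interface) into simple records, and merely flags cabinets running outside assumed environmental limits.

```python
# Hypothetical, illustrative sketch: readings are assumed to have been
# collected from the equipment already (e.g. via SNMP or a vendor API).
from dataclasses import dataclass

@dataclass
class CabinetReading:
    cabinet: str
    temperature_c: float   # inlet temperature
    humidity_pct: float    # relative humidity
    power_kw: float        # measured draw

# Assumed thresholds for this sketch; real limits depend on the equipment installed.
TEMP_MAX_C = 27.0
HUMIDITY_RANGE = (20.0, 80.0)

def flag_exceptions(readings):
    """Return cabinets whose environment falls outside the assumed limits."""
    flagged = []
    for r in readings:
        too_hot = r.temperature_c > TEMP_MAX_C
        bad_rh = not (HUMIDITY_RANGE[0] <= r.humidity_pct <= HUMIDITY_RANGE[1])
        if too_hot or bad_rh:
            flagged.append(r)
    return flagged

if __name__ == "__main__":
    sample = [
        CabinetReading("A01", 24.5, 45.0, 3.2),
        CabinetReading("A02", 29.1, 41.0, 4.8),  # runs hot
    ]
    for r in flag_exceptions(sample):
        print(f"{r.cabinet}: {r.temperature_c} °C, {r.humidity_pct}% RH")
```

The point is not the few lines of logic but the discipline around them: collecting readings consistently, keeping history and applying the same checks every time, which is exactly where ad-hoc tooling tends to fall down.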
But what is the best way to collate all this data and turn it into information to support efficient capacity and contingency planning? Many operators, especially of small- to medium-sized data centres, put their faith in their own experience and expertise, supported by that most versatile of software tools: the spreadsheet. Truly the Swiss Army Knife of the modern manager, the spreadsheet is ubiquitous, easy to use and endlessly customisable, and as such can be set up to answer most of the questions its individual user wants answered.
Therein lies a problem, however. As a personal productivity tool, the spreadsheet tends to produce solutions customised to the experience of its author rather than to the issues a data centre in continuous change is likely to face. As new issues arise, spreadsheet-based solutions become ever more complex and tortuous, frequently comprehensible, and only barely, to the person who devised them in the first place.
Much better information is derived from the use of Data Centre Infrastructure Management (DCIM) software, for example Schneider Electric’s StruxureWare for Data Centers™ DCIM software suite, which provides a sophisticated top-down view of the core infrastructure systems supporting a data centre. Such tools, which cover power, cooling and other environmental-management systems, allow managers to identify and resolve issues proactively, before they disrupt operations. As well as providing a dashboard-like view of the supporting infrastructure, they draw on current and historical data to answer a range of questions necessary for efficient operations.
Do I have sufficient power and cooling equipment to handle a proposed increase in server density? Have I over-specified the amount of cooling needed and how much is that costing me in energy bills and environmental charges? Where will I place my next server? What will be the impact of new equipment on my redundancy and safety margins?
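Behind questions like these sits fairly simple arithmetic, applied consistently against live data. As a hedged sketch (the figures and the 20% margin below are invented for illustration, not drawn from any particular facility or product), a headroom check for a proposed deployment might look like this:

```python
# Illustrative headroom check with invented figures; a DCIM tool performs
# this kind of calculation continuously against measured, not assumed, data.
def headroom_ok(installed_kw, measured_load_kw, proposed_load_kw,
                redundancy_margin=0.2):
    """Return True if the proposed load fits within installed capacity
    after reserving a redundancy/safety margin (20% by assumption here)."""
    usable_kw = installed_kw * (1.0 - redundancy_margin)
    return measured_load_kw + proposed_load_kw <= usable_kw

# Example: 100 kW of UPS capacity, 62 kW measured, proposing 15 kW of new servers.
print(headroom_ok(installed_kw=100.0, measured_load_kw=62.0, proposed_load_kw=15.0))
# True: 77 kW fits within the 80 kW usable after the assumed 20% margin.
```

The same check, repeated for cooling capacity, floor space and network ports, is what turns raw equipment data into a defensible answer about where the next server can safely go.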
Such software is becoming widely used in large data centres where 24x7 uptime is essential and complexity of operations is taken as a given, but who doesn't need continuous operations nowadays? Ultimately, it is not the size of the data centre or the complexity of its equipment that matters most, but the importance of the data and applications hosted within it.
Concerns that deploying a DCIM solution in a small data centre might be an extravagance should be measured against the benefits of foresight and the operational efficiencies that will ensue. Nobody should use a sledgehammer to crack a nut; but is a pocket-knife really the best tool for fixing problems in a mission-critical IT environment?