Go hybrid: why you need a balanced diet of SSDs and HDDs

Cost, speed and reliability - the three crucial factors when it comes to choosing your next data storage system, writes Warren Reid, Director of Marketing at Dot Hill Systems.


On the face of it, speed appears to be the ultimate goal. Everyone wants access to their data in real time, and solid-state drives (SSDs) and flash promise just that. With world records broken on a regular basis, 1 million IOPS is rapidly on its way to becoming an SSD standard.


Super-fast storage with real-time access to data and ultra-low latency is perfect for business-critical applications, where waiting even a few seconds for data can have huge implications. In the financial sector, for example, delays can cost millions, which is why many businesses are now opting for flash. High-performance storage brings an array (pun intended) of advantages, such as increased application responsiveness and business agility.


But speed isn’t cheap. A recent Gartner paper, “Solid-State Drives Will Complement, Not Replace, Hard-Disk Drives in Data Centers”, predicts that even in ten years’ time the cost per gigabyte (GB) of enterprise-level SSDs will remain more than 25 times that of the same grade of Hard Disk Drives (HDDs). In addition, not every business has that kind of IT budget, especially when immediate access to every piece of data the organisation produces is not really necessary. The alternative to SSDs is more traditional hard-disk technology: the cost is lower and capacities are relatively high, but performance is significantly slower. This is not an issue when you consider the vast proportion of non-critical data stored by most organisations, including old emails and copies of data that must be kept for internal or external compliance reasons. For such data, traditional spinning disks are the perfect medium. Not so if speed is of the essence.


The solution is a carefully designed balancing act, and it is the main driver behind the fast-growing adoption of hybrid storage, an approach endorsed by several analysts.


By using a combination of flash and traditional disk, organisations can prioritise and move data between the two, placing it on the faster or slower storage depending on how quickly it needs to be retrieved. The location can be swapped as the value of the data changes, for example at the end of the month, quarter or year in the case of sales or financial information. When this is achieved by bolting SSDs onto existing hard disk systems it is known as a hybrid approach. This is in contrast to flash-only solutions, which can exist as external networked arrays or internally within an application server.
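The placement logic described above can be sketched in a few lines. This is a hypothetical illustration, not Dot Hill's implementation: the tier names, recency thresholds and the reporting-window rule are all illustrative assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: choose a storage tier for a data set based on how
# recently it was accessed and whether it falls inside a month/quarter/
# year-end reporting window. All thresholds are illustrative.
def choose_tier(last_access: datetime, in_reporting_window: bool,
                now: datetime) -> str:
    if in_reporting_window or now - last_access < timedelta(days=7):
        return "ssd"         # 'hot' data: fast tier
    if now - last_access < timedelta(days=90):
        return "hdd"         # warm data: capacity tier
    return "hdd-nearline"    # cold/compliance data: cheapest tier

now = datetime(2014, 1, 31)
print(choose_tier(datetime(2014, 1, 30), False, now))  # ssd
print(choose_tier(datetime(2013, 12, 1), True, now))   # ssd (month-end demand)
print(choose_tier(datetime(2013, 6, 1), False, now))   # hdd-nearline
```

Note how the December sales data is pulled back onto the fast tier at month-end even though it has not been touched recently, matching the "value of the data changes over time" point above.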


So far, so good. But how do small, overstretched IT teams work out which data should be accessible at a moment’s notice? And with virtualisation added to the mix, how can they predict when a given data set is going to be most in demand?


Auto-tiering has been hailed as the answer. Often seen as a must-have technology, auto-tiering analyses the data workload and moves the most in-demand data sets to the most effective storage tier. It works autonomically, predicting access patterns and total capacity needs in both the short and long term. This sounds ideal, but in fact most systems on the market today move data from one tier to another as a batch process, with batch windows sometimes as long as 12 hours, so getting your hands on real-time data can still be difficult. New technologies promising intelligent tiering in real time are set to eliminate that problem. These are true hybrid systems combining both HDD and SSD, so you don’t have to invest massively in SSD technology to extend the life of your old HDD system. Typically, deploying between five and ten per cent of your total capacity as SSD is sufficient to accelerate ‘hot’ data sets under normal operation.


Because these systems have been built to support mixed access and workload requirements, they work much more intuitively. An auto-tiering solution will generally consist of two or more tiers, perhaps one tier of SSD combined with both enterprise- and near-line-class SAS drives. Using advanced algorithms, an automated system should move data to SSD only when it is certain that the storage performance requirement is going to outstrip the capabilities of HDD. It then continuously monitors the workload on that data set and moves it back down to less expensive HDD if and when demand subsides. Used in this way, SSD becomes a key enabler for application acceleration.
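The promote/demote decision described above can be sketched as a simple rule with hysteresis. The IOPS figures, the demote margin and the two-tier model are assumptions for illustration; real arrays use far more sophisticated heuristics.

```python
# Hypothetical sketch of the auto-tiering decision: promote a data set to
# SSD only when its measured workload outstrips what HDD can sustain, and
# demote it once demand subsides. Limits and margin are illustrative.
HDD_MAX_IOPS = 200     # assumed rough capability of the HDD tier
DEMOTE_MARGIN = 0.5    # demote only when load falls well below the limit

def next_tier(current_tier: str, measured_iops: float) -> str:
    if current_tier == "hdd" and measured_iops > HDD_MAX_IOPS:
        return "ssd"          # promote: HDD capability outstripped
    if current_tier == "ssd" and measured_iops < HDD_MAX_IOPS * DEMOTE_MARGIN:
        return "hdd"          # demote: demand has subsided
    return current_tier       # otherwise stay put

tier = "hdd"
for iops in [150, 450, 400, 180, 80]:   # a workload spike and its decay
    tier = next_tier(tier, iops)
    print(iops, tier)   # stays on SSD through the spike, returns to HDD at 80
```

The gap between the promote threshold (200 IOPS) and the demote threshold (100 IOPS) is deliberate: without that hysteresis, a data set hovering near the limit would thrash between tiers.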


In these environments, storage features such as thin provisioning and snapshots are also built specifically for a hybrid architecture, so their performance is not undermined by either tier.


A key characteristic of an automated tiering system should be simplicity of installation and operation. Many organisations either do not have in-house storage administration expertise or do not want to be concerned with allocating the right data to the right storage technology. Auto-tiering should allow them to run their business with the peace of mind that their storage system will automatically absorb peaks in data throughput, allowing them to meet their service level agreements, whether internal or external.


Of course, there are other ways to deploy flash technology, each with its merits and pitfalls. Several external flash-only storage arrays exist today, but the end user needs to determine how much expensive flash-based storage they really need. The flash-only approach also leaves the user with the problem of deciding which data really needs the high performance of SSD, and of course that requirement may change over time.
Another approach is to move the flash storage inside the server itself rather than on the storage area network. In this scenario the flash generally acts as a cache; however, caching usually accelerates every I/O transaction that passes through it, so both ‘hot’ data and less urgent transactions receive flash acceleration. Some vendors provide tools for virtualised environments that can retrospectively analyse which virtual machine (VM) presented the highest data workload through the internal flash storage. Using this data, an individual VM can be given exclusive use of the flash memory, but this approach is analogous to the batch process used in many auto-tiering systems: you have to assume that today’s busiest VM will be the same one tomorrow. The internal flash approach can therefore be a great way to accelerate application performance for deterministic workloads, i.e. where you know which data set is going to be in big demand day after day. There is a further consideration with the internal flash-only approach: moving storage back into the server may present a single point of failure unless the environment is clustered, with cache coherency established between two or more server nodes.
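The indiscriminate nature of caching described above can be seen in a minimal sketch. This is an illustrative LRU cache, not any vendor's product: note that a miss always installs the block in flash, whether the data is ‘hot’ or a one-off cold read, which is exactly the contrast with selective auto-tiering.

```python
from collections import OrderedDict

# Minimal sketch of server-side flash caching: every read that passes
# through is cached, so hot and cold blocks alike get flash acceleration.
# Capacity and block IDs are illustrative.
class FlashCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block_id -> data, in LRU order

    def read(self, block_id, fetch_from_hdd):
        if block_id in self.blocks:            # cache hit: flash speed
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = fetch_from_hdd(block_id)        # cache miss: HDD speed
        self.blocks[block_id] = data           # cached regardless of 'heat'
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

cache = FlashCache(capacity_blocks=2)
cache.read("a", str.upper)
cache.read("b", str.upper)
cache.read("a", str.upper)             # hit: "a" becomes most recent
cache.read("c", str.upper)             # miss: evicts "b", the LRU block
print(list(cache.blocks))              # ['a', 'c']
```

A single cold read of a large compliance archive would churn through a cache like this, evicting genuinely hot blocks, which is one reason the article argues for workload-aware tiering instead.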


So for the majority, the simplicity and failsafe nature of a real-time automated hybrid solution will provide the most efficient way to deploy flash technology. These hybrid technologies have been designed in response to the ever-growing demands on the data centre from users needing to store, retrieve and share the avalanche of data already being produced every minute of every day, and the immense amount predicted for the coming years. And they have been built specifically to eliminate any compromise in the golden triangle of speed, cost and reliability.
 
