SSDs: A sensible yet Flash-y option

Flash is a hot topic. The technology has been credited with an almost magical ability to speed up the performance of storage systems and dependent applications. However, flash is not a magic bullet, and turning an underperforming storage environment into a superstar requires a bit of planning, as well as an understanding of some of the potential challenges. Rebecca Thompson, VP Marketing for Avere Systems, provides an insight into the technology, the challenges and the implementation considerations.


BEFORE JUMPING INTO FLASH, there are two dimensions to consider when evaluating performance. First, there is the application workload, which can vary across read, write and metadata operations; flash, if set up correctly, can speed up these operations across the full range of access patterns, including random access to small files, sequential access to large files, and a mix of both. The second consideration is the amount of performance (in ops/sec) or throughput (in MB/sec) required for a given workload.
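
As a rough, hypothetical illustration of how these two dimensions relate, the sketch below converts operations per second into throughput for two very different access patterns; the workload figures are invented for the example, not measurements from any particular system.

```python
# Illustrative only: rough relationship between average operation size,
# operations per second (ops/sec) and throughput (MB/sec).
# The figures below are hypothetical workload numbers, not vendor specs.

def throughput_mb_per_sec(ops_per_sec: float, avg_op_size_kb: float) -> float:
    """Throughput = ops/sec x average operation size."""
    return ops_per_sec * avg_op_size_kb / 1024.0

# A small-file, random-access workload: many operations, little data moved.
print(throughput_mb_per_sec(ops_per_sec=50_000, avg_op_size_kb=4))    # ~195 MB/sec

# A large-file, sequential workload: few operations, lots of data moved.
print(throughput_mb_per_sec(ops_per_sec=500, avg_op_size_kb=1024))    # 500 MB/sec
```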

Any upgrade or change to a storage environment needs to meet these two workload and performance criteria. However, it is almost certain these parameters will not stay fixed for long as the environment grows, so it is advisable to choose an architecture that can scale performance and capacity linearly as and when needed. For an IT manager struggling with poor storage performance, it may seem logical to believe slow spinning disks are the culprit. However, many factors affect performance, and the first step to a successful storage upgrade is to test and measure the environment, as well as looking at where an upgrade can be most effective.

Understanding performance issues
The biggest drag on performance of NAS systems today is latency, the delay between when an action is initiated and when it’s completed.

Three main kinds of latency exist in a NAS system: hard disk drive (HDD) latency, storage filer CPU latency, and network latency. These latency classes are inherent in classic NAS architecture, and understanding them and how flash can address them is vital both for a successful upgrade and for delivering a successful longer-term storage strategy.

As an organisation’s data grows and the capacities of disk drives become bigger and bigger, the amount of time it takes to access a given file increases. Existing solutions to this problem have been less than ideal — NAS systems have moved to higher-speed (10K and 15K RPM) disk drives and sometimes, for an extra increment of speed, they short stroke them (that is, use only a fraction of their capacity) in order to read and write data off them faster. This type of disk overprovisioning takes up a lot of physical space in the datacentre, plus all that rotating mass generates heat and consumes a lot of electricity. Even the newest drives are relatively slow when compared with memory like NVRAM and DRAM, but replacing all the hard disk storage in a NAS system with semiconductor memory would be prohibitively expensive.
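
To see why faster spindles only go so far, here is a minimal back-of-the-envelope calculation of average rotational latency; the RPM values are standard drive classes used as assumptions, and the comment on short stroking summarises the point made above.

```python
# A minimal sketch of why faster spindles help: average rotational latency
# is half a revolution. The RPM figures are typical drive classes, used here
# only as illustrative assumptions.

def avg_rotational_latency_ms(rpm: int) -> float:
    revolutions_per_ms = rpm / 60_000.0          # revolutions per millisecond
    return 0.5 / revolutions_per_ms              # half a revolution, in ms

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: ~{avg_rotational_latency_ms(rpm):.2f} ms rotational latency")

# Output (approximate):
#  7200 RPM: ~4.17 ms
# 10000 RPM: ~3.00 ms
# 15000 RPM: ~2.00 ms
# Short stroking reduces seek distance, not rotational latency, so the
# gains from overprovisioning spinning disks flatten out quickly.
```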

Another common issue faced by classic NAS systems is storage filer CPU latency. As all file-access requests pass through the storage filer, the embedded CPUs within the filer can cause another form of latency. If more I/O requests hit the filer than its CPU(s) can handle, queuing occurs and application performance slows to a crawl. The only reliable way to ensure good performance with the classic NAS architecture is to use storage controllers with high-performance CPUs. Packing a storage controller full of high-end CPUs gives you a performance boost today, but it doesn't scale unless you swap out filers, which is an expensive option.
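
The effect of that queuing can be illustrated with a toy single-queue (M/M/1) model; the capacity and load figures below are hypothetical and real filer behaviour is far more complex, but the shape of the curve is the point: response time explodes as the CPUs approach saturation.

```python
# A toy M/M/1 queueing model illustrating why response time explodes as a
# filer's CPUs approach saturation. Service and arrival rates are assumed
# figures for illustration only.

def response_time_ms(arrival_ops_per_sec: float, service_ops_per_sec: float) -> float:
    utilisation = arrival_ops_per_sec / service_ops_per_sec
    if utilisation >= 1.0:
        return float("inf")                       # queue grows without bound
    service_time_ms = 1000.0 / service_ops_per_sec
    return service_time_ms / (1.0 - utilisation)  # M/M/1 mean response time

capacity = 100_000  # ops/sec the filer CPUs can service (assumed)
for load in (50_000, 80_000, 95_000, 99_000):
    print(f"{load} ops/sec -> {response_time_ms(load, capacity):.2f} ms per op")
# At 50%, 80%, 95% and 99% utilisation: 0.02, 0.05, 0.20 and 1.00 ms per op.
```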

The last performance issue is induced by the network. In a modern business environment, remote offices may be hundreds or even thousands of miles away from the organisation's datacentres, and data sent over the network is subject to propagation delay that degrades the user experience. The physical bandwidth between the sites may also be limited, causing network throughput bottlenecks. Encryption can further slow down transfer rates because of the processing overhead of scrambling the protocols and payloads. For some organisations, data needs to reside in a central physical location for security, compliance or convenience, so the network latency issue becomes significant, but it is not insurmountable with a bit of re-architecting of how data is distributed and where it resides. The combination of hard disk latency, storage CPU bottlenecks and wide-area network (WAN) delays can therefore be a major barrier to improving storage, but the clever use of flash filers can solve many of these issues in an elegant fashion.
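
A back-of-the-envelope propagation-delay estimate makes the WAN problem concrete; the city pairs and distances below are illustrative examples, and the calculation ignores routing, queuing and protocol overheads, so real round trips are longer.

```python
# Back-of-the-envelope WAN round-trip estimate: light in fibre travels at
# roughly two-thirds of its speed in a vacuum, about 200 km per millisecond.
# Distances are approximate straight-line examples.

SPEED_IN_FIBRE_KM_PER_MS = 300.0 * 2.0 / 3.0     # ~200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    one_way_ms = distance_km / SPEED_IN_FIBRE_KM_PER_MS
    return 2.0 * one_way_ms                       # ignores routing and queuing delays

for route, km in [("London -> Frankfurt", 640),
                  ("Los Angeles -> Las Vegas", 370),
                  ("London -> New York", 5_570)]:
    print(f"{route}: >= {round_trip_ms(km):.1f} ms per round trip")
# Every chatty protocol exchange pays this cost again, which is why moving
# data (or a cache of it) closer to the user matters more than raw bandwidth.
```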

Introducing edge filers
Even though SSD prices have dropped significantly, chip-based memory in all its forms is still up to 30 times more expensive than legacy spinning disk drives. Simply replacing all spinning disks with chip-based equivalents is not financially viable. Instead, in Avere's approach to NAS optimisation, a flash-based "Edge" filer shoulders the bulk of the processing load at high speed, while the high-capacity rotating storage back at the Core filer holds the bulk of the data that isn't currently being accessed.

An Edge filer requires updating the traditional data management model to take advantage of the performance of flash. The technology overcomes the traditional and costly approach of adding larger and more expensive controllers and overprovisioning CPU and all types of high-speed storage media to boost performance. Instead, an Edge filer sits in the communication path between the user and the NAS Core filer, at the location nearest to the user. If the user is at a remote site, at a distance from the datacentre where the NAS Core filer resides, WAN latency can be largely eliminated by placing the Edge filer physically close to the user at the remote site.

Since the Edge filer contains the hottest data, the data being accessed most frequently, the round-trip transit time from the user to the data is minimised. The long trip back to the NAS for cold data needs to be taken only a small fraction of the time, and overall performance is almost as good as it would be if the datacentre were right next door to the remote facility. Edge filers can include a mix of SSD and flash memory as well as SAS drives to form a high-performance Edge filesystem cache that sits in front of the slower storage and aggregates data into a fast access layer.
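
As a loose illustration of the caching idea behind an Edge filer, and not a description of Avere's implementation, the sketch below shows a read-through cache that serves hot data locally and falls back to a slower Core filer for cold data; the EdgeCache class, its capacity and the core_read callback are all hypothetical names introduced for the example.

```python
# A deliberately simplified read-through cache, only to illustrate the
# edge-filer idea: serve hot data locally, fall back to the slower Core
# filer for cold data. Not a description of Avere's implementation.

from collections import OrderedDict

class EdgeCache:
    def __init__(self, core_read, capacity: int = 1024):
        self.core_read = core_read            # callable that fetches from the Core filer
        self.capacity = capacity
        self.cache = OrderedDict()            # path -> data, kept in LRU order

    def read(self, path: str) -> bytes:
        if path in self.cache:                # hot data: served at edge speed
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.core_read(path)           # cold data: pay the trip to the Core filer
        self.cache[path] = data
        if len(self.cache) > self.capacity:   # evict the least recently used entry
            self.cache.popitem(last=False)
        return data

# Usage sketch: core_read would wrap an NFS/SMB call across the WAN.
cache = EdgeCache(core_read=lambda p: b"...data from core filer...")
cache.read("/projects/scene42/frame_0001.exr")   # slow the first time
cache.read("/projects/scene42/frame_0001.exr")   # fast thereafter
```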

Building a core-to-edge strategy
IT managers should consider a few key criteria when adapting a storage environment to take advantage of the boost offered by flash.
First, organisations should think about longer-term trends. The goal might be to create a predictable and scalable storage platform; in that case, consider optimising the environment by moving to a tiered approach consisting of cheaper and slower Core filers enhanced by faster Edge filer components. Locate the Core storage where hosting costs are low and high-performance network bandwidth is available, and provision it with high-capacity, low-cost hard disk drives. Then locate Edge filers closest to the users, populated with high-performance storage that will always hold the most actively accessed data. Place all the NAS resources under a single global namespace to hide any complications that a heterogeneous hardware or multi-filer environment might create.

This approach provides a good balance of cost versus performance, allows legacy storage to remain in place, which extends ROI, and ultimately allows simplified scalability through the addition of more Edge filers. Many organisations have already benefited from this approach, including director James Cameron's Digital Domain Productions. As the cost of flash capacity continues to drop, this approach will become the new normal.

Speeding up operations

WAN latency can be an expensive problem for any business that has facilities distributed across distances measured in miles. One industry that is severely affected by this problem is the digital production segment of the motion picture industry. Digital Domain Productions, co-founded by celebrated director James Cameron, is a leader in the industry.

A big part of Digital Domain’s job is to seamlessly combine live action with virtual characters and scenery that’s added digitally. Each frame of a motion picture requires massive computing at some centrally located render farm, consuming information provided by digital artists located elsewhere.

For cost reasons, Digital Domain’s render farm is located in Las Vegas, but its artists are in Los Angeles, San Francisco, and Vancouver, BC. Even with powerful computers, it takes hours to render a single movie frame. Motion pictures are typically shot at 24 frames per second. Multiply that by 60 seconds per minute, 60 minutes per hour, and two hours per feature film, and you get a sense of the magnitude of the workload.
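
For a rough sense of those numbers, the calculation below works through that multiplication; the hours-per-frame figure is an assumed placeholder, since the article only says that rendering a single frame takes hours.

```python
# Quick sense of scale for the rendering workload described above. The
# hours-per-frame figure is a hypothetical placeholder, not a quoted number.

frames = 24 * 60 * 60 * 2        # 24 fps x 60 s x 60 min x 2 hours = 172,800 frames
hours_per_frame = 3              # assumed: "hours to render a single movie frame"

render_hours = frames * hours_per_frame
print(f"{frames:,} frames, roughly {render_hours:,} render-hours "
      f"({render_hours / 24 / 365:.0f} machine-years on a single node)")
# 172,800 frames, roughly 518,400 render-hours (~59 machine-years)
```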

Digital Domain couldn't afford to locate its large render farm near the high-rent districts where its artistic talent lives, which is why it liked the idea of siting it in Las Vegas. However, simply moving the farm to Las Vegas looked unworkable too, because WAN latency would negatively impact performance.

The network transit time from Las Vegas to San Francisco or Vancouver was just too long, until it discovered the Avere Edge filer solution. Digital Domain placed Avere FXT 2550s at the colocation facility with its render nodes. In the facility, the FXTs inspect the data that the render nodes are requesting from the remote data storage in Los Angeles, then automatically tier and store the active data set on the RAM and SAS drives internal to the FXTs to maximise IOPS and minimise latency.

Digital Domain has also placed Avere FXT 2550s at the three regional locations to provide local acceleration and access to non-local storage. Data written onto FXT nodes after rendering automatically goes back to storage at the site that originated the source material. Given the nature of Digital Domain's work, the Avere Edge filer solution sped up operations by a factor of 250, turning a totally infeasible situation into something that met the company's needs.