Cloud has undoubtedly opened up new and improved ways to do business. According to Forrester analyst James Staten's blog, growth and maturity over the past three years have cemented cloud services and cloud platforms as an essential part of the IT landscape.
With public cloud predicted to grow to $191 billion in just six years, and private cloud, according to IDC, expected to see a compound annual growth rate of more than 50 per cent from 2012 to 2016, purchasing sentiment is now shifting from exploring cloud services to rationalising how they fit within a business's overall IT portfolio.
With such fervour around virtualised environments, it is no surprise that IT leaders such as IBM are making the transition to software and cloud, even as IBM celebrates the 50th anniversary of its mainframe this year. This behaviour represents the seismic shift the IT world is gradually recognising. A big challenge is to ensure the move does not leave enterprises without the option of running their applications in a reliable and available environment, particularly as organisations demand that applications be 'always on' and accessible.
Aberdeen Group's 2013 study, 'Downtime and Data Loss: How Much Can You Afford?', found that the average cost of downtime per hour was almost £100,000. Fear of an outage, and the financial and reputational damage that downtime brings, is holding many organisations back from experiencing the agility and flexibility that come with the cloud. This, combined with the media coverage that follows when a high-profile company suffers an outage, rightly demonstrates that downtime is not an option. Think Apple or RBS.
The fact is, organisations can no longer depend on traditional hardware or software techniques in this new cloud landscape. If a business consolidates five servers to one, that one server can then become mission critical. To move legacy applications written for traditional IT environments to the cloud typically involves rewriting them so that availability is built into the application itself.
This can be a costly exercise, and as such organisations have veered away from doing so. Consequently, they either forgo availability altogether, or they leave their applications where they are and forgo the advantages of running them in a cloud.
This trade-off should not be required. In fact, organisations should be able to run an environment that supports all of their applications, even those that require different levels of availability.
It is because of the availability concern that organisations need to go beyond the question – “how can cloud services fit within my overall IT portfolio?” – and rather ask “how can cloud services fit within my overall IT portfolio and ensure that they remain always-on?”
Software-defined availability (SDA) is the next generation of availability, one that uses the elastic nature of the cloud to ensure organisations meet the demand for an always-on environment now and in the future. It does this by dynamically moving applications to the appropriate environment depending on the level of availability required. These applications have traditionally been built with availability at the hardware layer, relying on recovery techniques such as running on fault-tolerant servers or clustering.
With SDA, downtime prevention and recovery decisions about availability are moved out of the hardware into a software layer that matches applications to the right infrastructure for the right level of availability.
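To make the placement idea concrete, here is a minimal sketch of what such a software layer might look like. The tier names, availability figures and cost ratios are illustrative assumptions, not part of any real SDA product: the point is simply that software, not hardware, chooses the cheapest infrastructure that still meets each workload's availability requirement.

```python
# Hypothetical sketch of an SDA-style placement policy.
# Tiers, figures and costs are illustrative assumptions only.

from dataclasses import dataclass

# Availability tiers, from most to least resilient (assumed values).
TIERS = {
    "fault_tolerant": {"availability": 0.99999, "relative_cost": 3.0},
    "clustered":      {"availability": 0.9999,  "relative_cost": 1.8},
    "general":        {"availability": 0.999,   "relative_cost": 1.0},
}

@dataclass
class Workload:
    name: str
    required_availability: float  # e.g. 0.9999 for an important app

def place(workload: Workload) -> str:
    """Return the cheapest tier that still meets the workload's
    availability requirement -- the core matching idea behind SDA."""
    candidates = [
        (spec["relative_cost"], tier)
        for tier, spec in TIERS.items()
        if spec["availability"] >= workload.required_availability
    ]
    if not candidates:
        raise ValueError(f"No tier satisfies {workload.name}")
    return min(candidates)[1]

print(place(Workload("payments", 0.99999)))  # fault_tolerant
print(place(Workload("reporting", 0.999)))   # general
```

A business-critical payments workload lands on the most resilient (and most expensive) tier, while a general-purpose reporting job runs on the cheapest one, which is exactly the per-workload matching described above.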
So, what are the benefits of SDA? Firstly, while the industry is still searching for a way to build cloud-ready applications easily, let alone highly available cloud applications, SDA allows organisations to do this now by allocating the right level of availability at the right time on a per-workload basis. With this approach organisations can run their cloud far more efficiently, without overpaying for availability when they don't need it, but still having it when it is required.
Secondly, incorporating an SDA layer when building cloud infrastructures, or moving legacy applications to the cloud, will enable enterprises and solution providers to dynamically move applications between availability levels in a single cloud environment based on whether they are business critical, important, or general purpose.
SDA can also build this flexibility into one holistic environment by changing the levels of availability based on the needs of specific applications during a given period. Organisations have the assurance that their applications will remain up and running during critical periods and can automatically move them to less costly availability levels afterwards, without the need to rewrite them.
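The period-based idea above can be sketched in a few lines. Everything here is a hypothetical illustration, assuming a simple daily critical window per application; real schedules would be richer, but the shape is the same: raise the tier during the critical period, drop back to a cheaper tier afterwards, with no change to the application itself.

```python
# Hypothetical sketch: raising a workload's availability tier during a
# critical window and lowering it afterwards. Names, tiers and windows
# are illustrative assumptions, not a real SDA API.

from datetime import datetime, time

def tier_for(app: str, now: datetime, critical_windows: dict) -> str:
    """Pick a tier by schedule: the more resilient 'clustered' tier
    during the app's critical window, cheaper 'general' otherwise."""
    window = critical_windows.get(app)
    if window and window[0] <= now.time() <= window[1]:
        return "clustered"
    return "general"

# Assumed example: month-end billing must stay up from 18:00 to 23:00.
windows = {"billing": (time(18, 0), time(23, 0))}

print(tier_for("billing", datetime(2014, 6, 30, 19, 30), windows))  # clustered
print(tier_for("billing", datetime(2014, 6, 30, 9, 0), windows))    # general
```

During its critical evening run the billing application sits on the resilient tier; the next morning it drops back to the inexpensive one automatically, which is the cost saving the paragraph above describes.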
Above all, SDA enables companies to run more efficiently and negates the need to pay for availability when it is not required. When applied to the cloud and services built in an OpenStack environment, SDA results in a more reliable, efficient cloud infrastructure, and one that enterprises and solution providers can embrace fully without hesitation. A model that can reduce complexity, and the level of IT skill required, is an attractive proposition and worth implementing.
Much has changed with the move to the cloud, and using the same availability techniques that have served for the past 50 years makes little sense when everything else is changing. Creating an SDA solution specifically for cloud will bring availability techniques up to date to deal more effectively with how our world is changing, and will equip organisations to operate more effectively now and in the future.