ElasticHosts claims first with real-time scalability

The CSP offers real-time pricing of actual cloud service consumption, as new Linux technology allows the creation of dynamically scalable, logical service containers for each cloud customer – meaning genuinely pay-for-what-you-use cloud services for the first time


Technology that has become an accepted part of the Linux kernel over the last year was spotted by Richard Davies, CEO of UK-based Cloud Service Provider ElasticHosts, and his team over a year ago. He realised they had identified tools that could be used to build cloud service containers into which each customer's workload can be ‘poured’. In effect, each one could be serviced inside its own logical ‘server’, which could be dynamically scaled to whatever resource size was needed at any point in time.

All users have to do is specify a maximum size for that container. Even then they need not spend much time deliberating over it, for it can be almost any size they like.

“They may consider that the maximum workload they will ever have will require 8 Gbytes of memory to ensure a reasonable response rate,” Davies said, “so they specify 12 Gbytes – and never use it. And because they never use it, they will never pay for it.”

That is the difference ElasticHosts is now offering, and one he sees giving the company a lead of at least a year in the marketplace. It will be that long, Davies reckons, before the earliest players in the CSP market manage to catch up.

The new Elastic Containers are, the company suggests, the first cloud servers to be billed to customers purely on consumption, rather than pre-defined capacity requirements, delivering cost savings that Davies suggests are typically in the 50 percent range and, depending on workload, often more and rarely less.

ElasticHosts' auto-scaling builds on these developments in the Linux kernel, which have allowed the company to develop containers that act as single, logical servers holding a customer's specified workload. The technology elastically expands and contracts to meet customer demand, entirely eliminating the need for manual provisioning.
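The kernel features in question are the control-group (cgroup) interfaces, which let a host cap and resize a group of processes' resources on the fly. As a minimal sketch only – the container name, helper functions and tooling here are illustrative assumptions, not ElasticHosts' actual implementation – resizing a container's memory ceiling through the cgroup v1 filesystem looks like this:

```python
# Sketch: adjusting a container's memory limit via the cgroup v1
# memory controller. Paths are the standard kernel interface, but the
# container name and helpers are hypothetical.

def gb_to_bytes(gb: int) -> int:
    """Convert gibibytes to bytes for the cgroup interface."""
    return gb * 1024 ** 3

def memory_limit_path(container: str) -> str:
    """cgroup v1 file that caps the group's memory usage."""
    return f"/sys/fs/cgroup/memory/{container}/memory.limit_in_bytes"

def resize_container(container: str, new_gb: int) -> None:
    """Raise or lower the container's memory ceiling in place --
    no reboot or re-provisioning of the workload is required."""
    with open(memory_limit_path(container), "w") as f:
        f.write(str(gb_to_bytes(new_gb)))

# e.g. resize_container("customer-42", 8) would raise the cap to 8 GB,
# assuming a cgroup named "customer-42" exists and we run as root.
```

Because the limit is just a file written by the host, it can be changed as often as the monitoring system demands, which is what makes near-real-time scaling practical.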

In practice, therefore, a typical SME customer can have its entire business world within a single, logical container that requires no additional software or server configuration; customers simply sign up for the service and capacity is continuously available.

Elastic Containers are available to all Linux users from ElasticHosts’ datacentres, backed by Solid State Drive (SSD) storage for added performance. The company now has nine datacentres around the globe, with two in the UK, three in the USA, and one each in Canada, Amsterdam and Sydney.

The key to the pricing model is that ElasticHosts can now track customer usage very closely and adjust the compute resources available to each container pretty much in real time. As the workload changes, the system adds or removes resources accordingly.

The minimum logged time period is 15 minutes, so the difference between the resources used and those charged for is generally small. This compares favourably with current usage and billing models, where users can be significantly over-charged.
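To make the billing difference concrete, here is a small sketch comparing 15-minute metered billing against a flat charge for pre-defined capacity. The usage pattern and per-unit price are invented for illustration and bear no relation to ElasticHosts' actual tariff:

```python
PRICE_PER_GB_HOUR = 0.01  # illustrative price, not a real tariff
INTERVAL_HOURS = 0.25     # one sample per 15-minute logging period

def metered_cost(samples_gb):
    """Charge only for memory actually in use during each 15-min slot."""
    return sum(gb * INTERVAL_HOURS * PRICE_PER_GB_HOUR for gb in samples_gb)

def flat_cost(provisioned_gb, n_samples):
    """Charge for the full provisioned size regardless of use."""
    return provisioned_gb * n_samples * INTERVAL_HOURS * PRICE_PER_GB_HOUR

# A day (96 slots) of mostly-idle usage on a server provisioned at 8 GB:
usage = [1.0] * 64 + [6.0] * 16 + [1.0] * 16
print(metered_cost(usage))        # billed on actual consumption
print(flat_cost(8.0, len(usage))) # billed on 8 GB throughout
```

With this invented pattern the metered bill comes to well under a third of the flat one, which is the effect the 15-minute logging granularity is designed to capture.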

An important part of this new service is the fact that ElasticHosts can present users with detailed graphs of their usage run-rate, showing the changes in service demand as they happen. This is generated from the monitoring data used to manage their service provisioning.

“This will be the first time that customers can actually see what resources they are consuming, and determine when they were being over-charged,” Davies said. “It will also identify the times when their demand rises to the point where it goes over the limits set in their old contracts. These are the times when their staff or customers found the service level degraded or stopped, and customers got a 404 error message.”

Davies identified the three main approaches to service scaling that exist today, each with its problems: fairly constant provisioning, known-peak-requirement contracts, and automatic load balancing.

“Many users have operations that are fairly constant, with only the occasional peak in demand,” he said, “but they have to accommodate those peaks in their service plan, which means that most of the time their service is significantly wrongly-sized, and they have to pay for that.”

Some users will have a known peak requirement and a contract to cover it. But such contracts take time and expertise to set up properly, and because they often require the additional resources to be manually spun up and brought online, they can still be quite crude in operation. There will be delays while the service ramps up, during which service levels to users and their customers can suffer, and periods when users are being over-charged for services they no longer require.

“There are some services – AWS, for example – that offer automatic load-balancing services. These allow rules to be set on the levels of available service provision, but they are still pretty crude, adding or removing a whole server at a time,” he said. “This approach also needs configuring, which requires expertise to do correctly and quickly.”

It also means that the tracking of demand, and the associated change in resources, is on the basis of large, server-sized chunks of resources, which again can mean times of poor service delivery or being charged for resources that are no longer required.
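The inefficiency of scaling in server-sized chunks is simple arithmetic. In this sketch the 8 GB instance size is an assumed figure, not any particular provider's offering:

```python
import math

SERVER_GB = 8  # assumed instance size; real offerings vary

def servers_provisioned(load_gb):
    """Server-granular autoscaling: always round up to whole machines."""
    return max(1, math.ceil(load_gb / SERVER_GB))

def overprovisioned_gb(load_gb):
    """Capacity paid for but unused under server-sized scaling."""
    return servers_provisioned(load_gb) * SERVER_GB - load_gb

# A 9 GB load forces a second 8 GB server online:
# 16 GB provisioned, 7 GB of it idle but billed.
```

Container-granular scaling, by contrast, can in principle add or remove capacity in arbitrarily small increments, so the gap between load and provision stays small.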

“These bursting-mode service offerings are still always behind the curve of what resources the customer is actually using,” he added, “whereas we can provide load management that is just about right, in real time.

"We've analysed hundreds of servers from some of our largest customers and noticed two major differences: firstly, a server running a typical workload will see 50 percent cost saving versus other major IaaS clouds, since typically less than 50 percent of total capacity is used through a full weekly cycle. Secondly, a server which frequently runs below its peak capacity, either due to idle periods or because it only occasionally needs to handle a large load, can save 75 percent or more."
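Davies' figures follow directly from the ratio of average to provisioned capacity, assuming (as a simplification) equal per-GB pricing under both models. The specific workload numbers below are illustrative:

```python
def metered_saving(avg_used_gb, provisioned_gb):
    """Fraction saved when billing tracks average use instead of
    provisioned capacity (assumes equal per-GB pricing)."""
    return 1 - avg_used_gb / provisioned_gb

# ~50% utilisation over a full weekly cycle -> ~50% saving
assert metered_saving(4.0, 8.0) == 0.5

# A mostly-idle server averaging 2 GB against an 8 GB peak -> 75% saving
assert metered_saving(2.0, 8.0) == 0.75
```

In other words, the quoted 50 and 75 percent savings are just the unused fractions of capacity under the old model; the less of its peak a server uses, the bigger the saving.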

According to Davies, this new approach can also reduce disaster recovery costs – Elastic Containers can strip out a claimed 80 percent or more of them. A fully-configured copy of the primary server – commonly known as a ‘hot spare’ – can run continuously inside a container, yet be billed only for the little it consumes while idle. This removes the need for companies to replicate between 50 and 100 percent of their servers as ‘hot spares’ provisioned at full capacity: effectively paying for capacity twice.
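The disaster-recovery saving reduces to the same consumption arithmetic. The idle-consumption fraction below is an assumed figure chosen to reproduce the claimed 80 percent, not a measured one:

```python
def dr_saving(idle_fraction):
    """Saving on a hot spare billed at its idle consumption
    rather than duplicated at full provisioned capacity."""
    return 1 - idle_fraction

# If an idle hot spare consumes ~20% of the primary's resources
# (an assumed figure), the spare costs 80% less than a full replica,
# in line with the claimed "80 percent or more".
saving = dr_saving(0.20)
```

The spare remains fully configured and instantly available; only its billing shrinks to match its near-idle consumption.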
