Are the days of the large, powerful server and the complex, all-encompassing operating system finally coming to an end? Is the ultimate architecture of cloud delivery going to be something rather different: perhaps many millions of small micro-servers running single-function applets, rather than huge brutes each running many tens of virtual machines?
One person who feels this is the way cloud delivery is moving is Matt Quinn, CTO of Tibco, who sees a future in which the operating system is replaced by the browser, and the server needed to run it is a single-core 'shadow' of its current self.
“There are some developments going on where it becomes possible to run all the functionality needed to run an app inside the browser,” he said at the recent Tibco Transform event held in Paris. “That raises the possibility that the operating system as we know it now becomes irrelevant, which also means that the large commodity servers running lots of virtual machines that are common today will also become irrelevant.”
This could have remained an interesting piece of speculation about the future had it not been for a passing comment made later by Toby Owen, Head of Technical Strategy in EMEA for Rackspace. He referred to the company's acquisition last year of a company called ZeroVM.
“This uses a container-based approach that won’t need an operating system or a hypervisor,” he said. “It is not a product yet, but we do plan to introduce a sandbox environment soon so developers can investigate it.”
The software foundations for the fundamental change Quinn foresees are therefore very much in place already. The next question is what can be expected from that change?
This can be divided into two areas: the hardware and software architectures, and the operational uses and changes that become possible.
In the architectural camp the main change will likely be the demise of the large servers running many VMs. Those machines may be considered commodity items, but they are still not cheap to buy, and the bigger they get, the more complex the operating system and VM management environments have to become to make them work even remotely efficiently.
It would not surprise me if in the near future the issue of poor server utilisation raises its head again as the machines end up spending more and more of their resources managing themselves rather than doing productive work.
Instead, the servers will become tiny machines, maybe even using a single processor core and capable of running just one process at a time, inside a browser rather than an OS. How small will they be? Well, let us not forget that last year Intel demonstrated a single-chip PC in an SD-card form factor. Given the very nature of semiconductor production processes, once it has been proved possible, such chips can be made in huge volumes, and the bigger the volume, the lower the unit cost.
Rather than datacentres boasting of having 4,000 servers available, how about boasting of having 4 million, or 40 million?
As for managing that type of environment, most of the concepts are already well established in the form of the parallel processing environments of the supercomputing world. Managing many thousands of process threads simultaneously is meat and drink to that world.
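By way of illustration, here is a minimal sketch of that dispatch pattern in Python; the worker count and the toy applet functions are purely my assumptions, not anything Quinn or the supercomputing world has specified.

```python
# Minimal sketch: farming many short-lived, single-function tasks out
# to a pool of workers, in the style of supercomputing job schedulers.
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_applet(func, payload):
    # One self-contained unit of work: input in, result out, no state.
    return func(payload)

def dispatch(tasks, workers=64):
    # tasks is an iterable of (func, payload) pairs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_applet, f, p) for f, p in tasks]
        return [fut.result() for fut in as_completed(futures)]

# Example: thousands of tiny jobs, each done and gone in milliseconds.
if __name__ == "__main__":
    jobs = [(lambda x: x * x, n) for n in range(10_000)]
    print(sum(dispatch(jobs)))
```

The scheduling machinery, in other words, is ordinary, well-worn stuff; what changes is only the scale and the brevity of each job.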
And there is already a growing realisation that one of the underlying process management changes that comes with the cloud is that the lifecycles of applications are getting shorter. Long gone are the days of 18 months of requirements planning, five years of coding and testing, and 'n' years of work in production. Instead, the lifecycle of an application, from conception to termination, can now be measured in months, often weeks. Soon enough it will be days or hours.
Applications will be designed to perform just one specific task 'now', not cover every base of possibility for the next five years. The lifecycle will be: create, load, do the job, clear out, delete.
That does not need huge servers, or complex operating systems. What it will need instead is an overarching, policy-driven, analytics-based event management environment to oversee the running of processes in order to achieve a business objective.
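Purely as a thought experiment, that whole lifecycle fits in a few lines. Everything in the sketch below is hypothetical: the 'handle' entry point, the result URL and the idea of shipping applet source as text are my assumptions, not a description of ZeroVM or any shipping product.

```python
# Hypothetical sketch of the applet lifecycle:
# create, load, do the job, transmit the result, delete.
import json
import urllib.request

def run_lifecycle(applet_source: str, payload, result_url: str):
    scope = {}
    exec(applet_source, scope)            # create + load
    result = scope["handle"](payload)     # do the job
    req = urllib.request.Request(         # transmit the result
        result_url,
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
    # delete: when this function returns, nothing of the applet persists.
```

Note that nothing here needs an operating system's worth of services: one entry point in, one result out, and the overseeing management environment does the rest.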
When it comes to what such an environment could achieve for business, the obvious starting suggestion is cost savings. That has always been the first suggestion with cloud (years ago I thought so too), yet in practice it never works out that way. At one level, the cost of using one SD-card 'server' for a day, in a box of several hundred thousand, will probably be measured in fractions of a penny, but you will end up using many more than just one, I am sure.
So forget costs as an issue and instead think of revenue and profits. They come from greatly increased flexibility and agility. If one of these servers can load a function applet, run it, transmit the result, clear down and load the next applet, all in half a second, the scope for its application really is limited only by the imagination of those applying it.
The practical upshot of this is likely to be that future apps will be written by the end users, where 'written' actually means something like 'conceptually outline it'. As Quinn observed: "We are reaching a stage where all applications will be in beta. In practice, by the time they reach a point of being 'finished' that will probably be the time they are killed off."
That means the technology underpinning business processes will be able to change as rapidly as business people want to change those processes. And getting a process 'wrong' will no longer presage the imminent death of the business, because each problem app will be small, and even if it cascades across many servers and processes the policy management system will probably spot the problem, stop it running and flag the issue to its creators. A good one may even suggest a remedy.
Security, too, becomes easier to manage. If all of these tiny servers are single-function and run the same browser, defending them against malicious attack becomes simpler. And if something malicious does get into an individual server, it is easier to isolate and destroy. The management environment would probably end up achieving that without operators or users noticing anything had happened.
That management environment would become an obvious target for hackers, but it would be a centralised resource and therefore easier to defend. And, as a policy-driven system, it would largely defend itself anyway: "Am I supposed to be doing that? No? Kill it then."
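A minimal sketch of that reflex, where the action whitelist and the kill/flag callbacks are illustrative assumptions of mine rather than any real product's API:

```python
# Hypothetical sketch of the policy reflex:
# "Am I supposed to be doing that? No? Kill it then."
ALLOWED_ACTIONS = {"load_applet", "run_applet", "transmit_result"}

def enforce(process_id, action, kill, flag):
    # kill and flag are callbacks supplied by the management layer.
    if action not in ALLOWED_ACTIONS:
        kill(process_id)                                  # stop it running
        flag(process_id, f"unexpected action: {action}")  # tell the creators
        return False
    return True
```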
This is still, of course, largely speculation on my part, but it is now clear that something like this is starting to roll.