Disrupt or die is the new battle cry for software innovation

By Anand Krishnan, EVP, Cloud and General Manager, Canonical.

Where there’s innovation, there’s disruption. No matter which industry you work in, there’s always a disruptor: Netflix in media, Amazon in retail, Skype and WhatsApp in telephony, all offering customers something faster, cheaper or simply better than what went before. We have become used to technology being at the forefront of disruption, but what we are looking at today is software-defined disruption. Technical innovation is nothing new; it has been disrupting industries for decades. But disruption can be disturbing for some, and we see companies in multiple vertical sectors looking over their shoulders to see which start-ups are going to change or influence their markets.
 
This is the landscape in which Canonical and our customers operate, and one that is shaking up the thinking of savvy enterprise companies. When we get a call, it is typically to solve one of two problems as expressed by the customer. Either the company is looking to adopt an entirely new workload, such as AI, machine learning, Kubernetes or containers. Or, and this is increasingly the case, it has explicitly identified a competitor who poses an existential threat to its business, and the conversation is about how best to head off that threat with new capability. In practice, these become the same meeting. Amazon buying Whole Foods puts every established supermarket chain on watch for the inevitable disruption this will bring. Our conversation is about leaning in to beat that trend rather than being left behind by it.
 
Software-defined businesses of the future – be brave and take a leap of faith
One weapon available to everyone, incumbent or disruptor, is open-source software. At its core, open source gives you a great deal of capability, both in libraries of data and code and in the millions of individuals working on those libraries. But open source also offers a chance to share and crowd-source innovation. By sharing your efforts to solve big, hard problems, you invite the world to help improve the answers in a way that no single company could manage alone. It is now normal to find entities ranging from Walmart and Carrefour to eBay loudly open-sourcing their work and inviting anyone interested in retail innovation to contribute to and build on this shared body of work. Sharing is always a leap of faith, but in this era of disruption it is one well worth taking.
 
And while open source has typically been assumed to refer to software, the concept is just as applicable to operations, i.e. how that software is operated in a datacentre. For example, Deutsche Telekom and Bell Canada, two large telcos on different continents, share a similar approach to operations for their next-generation network infrastructure. They have decided that sharing the same underlying models of their IT infrastructure gives both of them a competitive advantage: if one of them makes a marginal improvement to a piece of its own stack, everyone else using that stack benefits. Differentiation can now be focused on the services delivered to end-users rather than the underlying servers they run on. We are going to start seeing a world where infrastructure and operational knowledge become a commodity, and where crowdsourcing of IT becomes something we do as a matter of course.
 
Big Software – dealing with infrastructure complexity
One reason that this crowd-sourcing of IT is inevitable is that we are now dealing with a different class of software.
 
Most legacy infrastructure, which takes up the bulk of the budget and the floor space in enterprise datacentres, typically consists of monolithic, slow-changing applications - a database server, say - running on a relatively small number of machines. But take any cutting-edge software capability today - machine learning, big data, or indeed an OpenStack architecture - and it must be integrated, configured and tuned in a way that is specific to each group of users. The resulting solution is typically assembled from multiple, disparate sources and then deployed across elastic infrastructure that can scale to thousands of servers. Change - patches, versions, configuration updates - is assumed to be part of the daily beat, not a special event. Operations at this scale and speed is a different, and far more complex, problem.
 
We coined the term “Big Software” to describe this class of at-scale software that organisations now rely on to stay ahead. Any innovating organisation must expect to ingest and rely on growing amounts of Big Software. There is simply no way that any organisation can ramp up and maintain operational expertise of this new type rapidly enough to keep pace with the business imperative. But they can get there by open-sourcing their IT operations knowledge. Doing this involves encapsulating operational expertise in intelligent, open-source ‘models’ that are iterated on by many organisations at once. Those ‘models’ become the automation backbone for Big Software, delivering speed and economics that legacy IT approaches can only dream of.
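To make the idea of an operational ‘model’ concrete, here is a minimal sketch in Python. It is purely illustrative: the ServiceModel and LiveState classes, the reconcile function and the example service are assumptions made for this article, not a real Canonical API. The point is the division of labour: the shared model declares what should be running, and automation works out the steps needed to get there.

    # A minimal, hypothetical sketch of model-driven operations: the operator
    # declares what the deployment should look like, and a reconciler works out
    # how to get there. All names here are illustrative, not a real product API.
    from dataclasses import dataclass, field


    @dataclass
    class ServiceModel:
        """The shared, declarative 'model': what should be running."""
        name: str
        units: int                          # desired number of running instances
        config: dict = field(default_factory=dict)


    @dataclass
    class LiveState:
        """What is actually observed to be running right now."""
        name: str
        units: int = 0
        config: dict = field(default_factory=dict)


    def reconcile(model: ServiceModel, live: LiveState) -> list:
        """Compare the desired model with the live state and list the actions needed.

        In a real system these actions would drive an orchestration API; here
        they are returned as strings purely for illustration.
        """
        actions = []
        if live.units < model.units:
            actions.append(f"add {model.units - live.units} unit(s) of {model.name}")
        elif live.units > model.units:
            actions.append(f"remove {live.units - model.units} unit(s) of {model.name}")
        if live.config != model.config:
            actions.append(f"apply config {model.config} to {model.name}")
        return actions


    if __name__ == "__main__":
        # The crowd-sourced model: three units of a message queue with TLS enabled.
        desired = ServiceModel(name="message-queue", units=3, config={"tls": True})
        # What this particular datacentre is running today.
        observed = LiveState(name="message-queue", units=1, config={"tls": False})
        for action in reconcile(desired, observed):
            print(action)

Because the model itself is what gets shared and improved, every organisation running it inherits each refinement, which is the crowd-sourcing effect described above.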
 
Automation is the key to setting us free
We believe that this shift to the ‘model-driven’ automation of IT described above is inevitable, because the economics say so.
 
Companies routinely pour 80% of their IT budget into simply operating existing infrastructure: running required installations and upgrades and basically keeping the lights on. That leaves just 20% of the budget for innovation, and this is the shortfall that disruptors are able to exploit. If your business is to grow and remain competitive in this software-defined age, that dial needs to move the other way, and quite substantially.
 
It starts with getting past the mindset that IT operations have to be done by hand. That approach was adequate for the decade gone by, but in a world of at-scale infrastructure and agile, fast-changing, composable workloads ingested from a variety of sources, every IT organisation has to think about running its datacentres the way a Google or an Amazon runs theirs. This means automation as the default: moving beyond simple scripting of batch processes to truly intelligent, model-driven operations that allow IT staff to offload the routine completely and spend their time on competitive differentiation.
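To make that contrast concrete, here is a deliberately simplified, hypothetical illustration in Python (none of these names represent a real product interface): the scripted approach spells out every step by hand and has to be rewritten as the estate grows or changes, while the model-driven approach only declares the new desired state and leaves the steps to automation such as the sketch in the previous section.

    # Hypothetical contrast: hand-written scripting versus declaring a new model.

    def scripted_upgrade(hosts):
        # Imperative approach: every step is spelled out per host. The commands
        # are only printed here for illustration rather than executed.
        for host in hosts:
            print(f"run on {host}: install myapp 2.0")
            print(f"run on {host}: restart myapp")

    # Model-driven approach: the operator edits the desired state, and a
    # reconciler (like the earlier sketch) computes and applies the steps,
    # whether the service runs on ten machines or ten thousand.
    desired_state = {"myapp": {"version": "2.0", "units": 200}}

    scripted_upgrade(["web-01", "web-02"])   # manual, per-host, error-prone
    print(f"declare: {desired_state}")       # declarative, scale-independent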
 
We work with customers across industries to bring the power of model-driven automation to their datacentres. For example, we built a research cloud for a pharmaceutical customer. In under 90 days they went from concept to being able to deploy, configure, stop and start applications across thousands of machines. Where once their IT team would have done manual, one-off work across tens of machines, model-driven operations now allow that team to crunch big data sets, at will, on cloud infrastructure that just works. The pay-off is in the pace of discovery - shorter time-to-market for new drugs, new chemicals or molecular entities - with the potential return measured in billions per patent. True model-driven automation pays for itself and then some.
 
We are in an extraordinary period of creative disruption. Software innovation isn’t about cutting jobs to slash costs. It’s about letting go and making the software do the work, so that companies can enable people to be smarter, move faster and innovate. Disrupt or die is the new battle cry for both challenger and incumbent in this software-defined era.