As someone who has been in the storage industry for almost 40 years, I’ve amassed a huge body of knowledge about the key issues that storage professionals need to be considering today. Here are the five critical topics that enterprise storage specialists need to understand to be successful.
After you have read what is in this article, let me know what you think about my perspectives and the topics of interest. Do you agree with me? What insights do you have to share? How do you see these different topics? What have you learned? Join the virtual water cooler talk using hashtag #InfinidatTalk.
1. Explosive growth of data by 2025. What does it mean for storage?
The global datasphere continues to grow exponentially. IDC has tracked an increase in data from 33 zettabytes in 2018 to a projected 175 zettabytes by 2025, while Statista puts the 2025 figure at 180 zettabytes.
This explosive growth reflects how much data is actually being used – and, therefore, it creates significant demand for enterprise storage. Here’s the key point for CIOs to consider as they develop their future plans: enormous growth in data and the increased need for storage capacity do not mean that enterprises need “explosive growth” in their IT budgets to handle it.
Given those budget constraints, storage consolidation is a powerful strategy for absorbing huge leaps in data. The traditional response to data growth has been more, more, more – more arrays, more silos, more complexity. A smarter strategy is less, less, less: enterprises need to take a forward-thinking approach to avoid being caught on the back foot when the implications of explosive data growth hit.
In addition, this increase in data will exacerbate the already tremendous IT skills gap. There will be a greater need for qualified IT professionals who can help enterprises navigate overwhelming bursts of data – but there are not enough capable IT pros available to throw at the problem. This means enterprises need to simplify their storage infrastructure and data centres to adeptly handle multi-petabyte volumes of data and hybrid cloud storage configurations. They can achieve this by leveraging services-oriented automation to automatically and autonomously store their data, whether it is held on-premises or in the cloud.
Just to put these data numbers in perspective: if astronomers are processing 10 petabytes of data every hour from a telescope, such as the Square Kilometre Array (SKA) telescope, one exabyte of data is generated every four days of operation. It takes 1,000 exabytes to equal one zettabyte.
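Those unit relationships are easy to sanity-check. A short calculation (using decimal units, 1 EB = 1,000 PB and 1 ZB = 1,000 EB) reproduces the figures above:

```python
# Sanity-check of the telescope arithmetic above, using decimal units.
PB_PER_HOUR = 10          # SKA-class data rate cited in the text
PB_PER_EXABYTE = 1_000    # 1 EB = 1,000 PB
EB_PER_ZETTABYTE = 1_000  # 1 ZB = 1,000 EB

hours_per_exabyte = PB_PER_EXABYTE / PB_PER_HOUR   # 100 hours
days_per_exabyte = hours_per_exabyte / 24          # ~4.2 days, as stated

print(f"One exabyte every {days_per_exabyte:.1f} days of operation")
print(f"At that rate, one zettabyte takes about "
      f"{days_per_exabyte * EB_PER_ZETTABYTE / 365:.0f} years")
```

In other words, even a telescope drinking ten petabytes an hour would need over a decade of continuous operation to produce a single zettabyte – and the global datasphere is measured in the hundreds of them.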
According to University of California researchers, if all human speech ever spoken were digitized into 16-kilohertz 16-bit audio files, it would be over 42 zettabytes. This speaks to the magnitude of the datasphere that has taken shape and has made the storage of data, in a general sense, extremely important to the future of humankind.
2. Cybercrime is getting worse. How can storage help?
Cybercrime is projected to cost the world $9.5 trillion, according to Cybersecurity Ventures. It’s not shrinking. Rather, it’s expanding enormously, which makes it all the more problematic that too many enterprises are leaving their enterprise storage systems vulnerable to cyberattacks through a lack of intelligent cyber resilience and recovery.
Even though a company’s most valuable data is stored on their enterprise storage systems, it’s ironic that storage is often left out of a company’s overall cybersecurity strategy. The question is not “if” your enterprise will suffer a cyberattack, but “when” and “how often.”
According to a 2023 Fortune 500 survey, cybersecurity is the second biggest concern of CEOs. One wonders if CEOs will soon have the security of enterprise storage on their minds, too, if IT leaders don’t address the interconnection of cybersecurity and cyber resilient storage. Enterprises desperately need to shore up their cyber protection on the storage front.
There’s no question that you should be using immutable snapshots (unalterable copies of data) for reliable, rapid recovery of data. But there is one more piece of the cyber puzzle that is critical to a well-rounded cyber resilience implementation – cyber detection. This capability has a twofold purpose. First, cyber detection serves as an early warning system to help you protect the data; it can conveniently tie into data centre-wide security software, revealing what is being detected from a cyber standpoint.
The second useful purpose is after a cyberattack has occurred. With cyber detection, you can get to a known good copy of data faster. It’s vital to have a clean copy because if you recover data that has hidden malware or ransomware in it, you are going down a self-defeating path. Malware and ransomware do not pound their chest like King Kong. They are much more surreptitious, lurking and hard to detect.
This is why you need machine learning-driven cyber detection to scan the data in primary storage and secondary storage for any corruption before you recover it. Other security scans that an enterprise does may not detect the malware or ransomware at all, even though it is hidden there. The most effective way to identify it and root it out is a cyber detection capability built into the primary storage system.
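To be clear, no vendor’s actual detection engine looks like a few lines of Python; production systems apply trained machine-learning models to far richer signals. But as a toy illustration of the underlying idea, a Shannon-entropy check can flag snapshots whose sampled blocks look encrypted – ransomware output is close to random, while normal business data is not – and then pick the newest clean copy to recover from. Everything here (the threshold, the snapshot labels, the sampling) is an invented assumption:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte of the sample; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def latest_clean_snapshot(snapshots, threshold=7.5):
    """Return the newest snapshot whose sampled blocks do not look encrypted.

    `snapshots` is a list of (label, sample_bytes) pairs, oldest first.
    The 7.5 bits/byte threshold is an illustrative assumption.
    """
    for label, sample in reversed(snapshots):
        if shannon_entropy(sample) < threshold:
            return label
    return None  # no known good copy found

# Toy data: repetitive plain text scores low entropy; random bytes stand in
# for ransomware-encrypted blocks and score near 8 bits/byte.
snaps = [
    ("mon", b"quarterly ledger entries " * 40),
    ("tue", b"customer records, plain text " * 40),
    ("wed", os.urandom(1024)),  # stand-in for an encrypted (infected) snapshot
]
print(latest_clean_snapshot(snaps))  # → tue
```

The point of the sketch is the workflow, not the maths: scan before you recover, and restore from the newest copy that passes the scan rather than blindly from the newest copy.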
These advanced capabilities to secure data infrastructures are necessary to address a harsh reality. If an enterprise, or service provider, does not have cyber resilient storage, the damage that cyber criminals can do is significant – it’s the equivalent of leaving a bank vault door open and unguarded. Storage of a company’s data, which is among its most valuable assets, can no longer be considered separate from a comprehensive cybersecurity strategy.
3. What’s up with AIOps? Why it matters to enterprise storage.
Artificial Intelligence for IT Operations (AIOps) in enterprise storage is a key aspect to simplifying IT operations, reducing administrative overhead, and adding a predictive layer onto the data storage infrastructure. It's increasing in importance because of the industry shift toward a platform approach to enterprise storage.
AIOps supports scalable, multi-petabyte storage-as-a-service (STaaS) solutions, enabling enterprises to centralise operations and improve cost management. The beauty of AIOps is that capacity and workloads can be managed far more flexibly.
Whether they call it "AIOps" yet or not, enterprises are seeking the IT “superpowers” of advanced predictive analytics, early issue detection, and proactive support, which are integral to enabling the storage-as-a-service experience. Indeed, this STaaS experience must be tailored to enterprise requirements and economics throughout the deployment lifecycle.
AIOps is an approach that combines autonomous automation with analytics and some form of artificial intelligence, such as machine learning, or better yet, deep learning, on a multi-layered technology platform. These capabilities enhance data storage with built-in intelligence that optimises application environments and performance over time, essentially delivering a zero-touch, set-it-and-forget-it experience. This software can dynamically adapt to changing application, user, and performance demands – without administrative overhead. It enables 100% SLA-based guarantees, predictive abilities, and optimal combinations of underlying media.
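The “predictive abilities” piece is easiest to picture with a deliberately simplified sketch. Real AIOps platforms draw on far richer telemetry and far more sophisticated models, but even a linear trend fitted to daily capacity samples shows the shape of the idea: forecast when an array will run out of space, long before it does. The sample figures below are invented for illustration:

```python
# Minimal sketch of a predictive capacity forecast: fit a least-squares line
# to daily used-capacity samples and estimate days until the array is full.
# Illustrative only; real AIOps models are far richer than a linear trend.

def days_until_full(used_tb, capacity_tb):
    """Linear fit over daily samples; days from today until the trend hits capacity."""
    n = len(used_tb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(used_tb) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, used_tb)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion forecast
    intercept = y_mean - slope * x_mean
    return (capacity_tb - intercept) / slope - (n - 1)

# 31 daily samples of steady ~2 TB/day growth on a 1,000 TB array,
# ending at 800 TB used today.
samples = [740 + 2 * d for d in range(31)]
print(round(days_until_full(samples, 1000)))  # → 100
```

An AIOps layer runs this kind of forecast continuously across the whole estate and turns the answer into proactive support – ordering capacity or rebalancing workloads before anyone files a ticket.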
4. Rising costs, constraints, and environmental impact in data centres
IT leaders are dealing with the rising costs of energy, floor space, rack space, cooling, and operational resources in data centres, as well as environmental impact. They are being required to do more in less space. The expense of powering a data centre, including storage arrays and servers, is only going up, driven by higher energy prices. IT budgets are being squeezed by escalating real estate costs. Last, but not least, the increasing need to dispose of old equipment and the rush to install new systems that produce more carbon emissions are having an environmental effect as well.
What should an IT manager who is overseeing or managing the storage infrastructure do in the face of these challenges? Where should the IT team focus to minimise the rising costs, make the most of space constraints, and responsibly reduce the environmental impact as much as possible?
One good option is to reduce the footprint that legacy storage arrays take up and seek double-digit energy efficiency improvements. This also means fewer personnel are needed to manage the storage system – it’s doing more with less: getting more from storage while spending less, using less energy, and taking up less space in the data centre.
The environmental impact is reduced because you have opted for more efficient, green IT-optimising storage. There is less hardware to dispose of, and you produce fewer carbon emissions than you did before consolidating your storage solutions. It’s a way to evolve your data centre into a centre of ‘green IT’ excellence.
We call it E2 because it simultaneously decreases costs (an economic benefit) and reduces environmental impact (an environmental benefit).
5. Single storage operating system vs. multiple operating systems
This one is a no-brainer. When you have to deal with multiple storage operating systems across a vendor’s portfolio, you are forced to deal with complexity and additional IT operational costs. It’s better to have a single operating system that works across storage systems, including primary storage, secondary storage, and hybrid multi-cloud environments. The compatibility, efficiency, and simplicity of a single OS make the life of a storage admin much easier.
If your enterprise has too many storage arrays and you have to manage several different operating systems – even across a single vendor’s storage products – the experience takes up more time than necessary, and there is more risk of a mistake, a distraction, or a breakdown. Why do you want the headache?
Let’s say you have 20 or more arrays installed today across two data centres. Storage consolidation can condense those 20 arrays into just two, at petabyte scale. It knocks out the complexity of multiple operating systems across three or more vendors, saving you time and money. Whether your enterprise takes an on-premises or a hybrid cloud approach, it is far better to interact with a single operating system and carry your efficiency learnings across the whole estate. All of this contributes to a far better user experience.
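To make the consolidation arithmetic concrete, here is a back-of-the-envelope sketch. Every power and rack-space figure in it is an illustrative assumption – not a measured result or any vendor’s published specification:

```python
# Rough arithmetic behind the 20-arrays-to-2 consolidation scenario above.
# All input figures are illustrative assumptions, not vendor data.
OLD_ARRAYS, NEW_ARRAYS = 20, 2
WATTS_PER_OLD_ARRAY = 8_000    # assumed draw of one legacy array
WATTS_PER_NEW_ARRAY = 12_000   # a consolidated petabyte-scale system draws more each
RACK_U_PER_OLD = 42            # assume each legacy array fills a full rack
RACK_U_PER_NEW = 42            # and so does each consolidated system

old_kw = OLD_ARRAYS * WATTS_PER_OLD_ARRAY / 1000
new_kw = NEW_ARRAYS * WATTS_PER_NEW_ARRAY / 1000
old_u = OLD_ARRAYS * RACK_U_PER_OLD
new_u = NEW_ARRAYS * RACK_U_PER_NEW

print(f"Power: {old_kw:.0f} kW -> {new_kw:.0f} kW ({1 - new_kw / old_kw:.0%} less)")
print(f"Floor space: {old_u} U -> {new_u} U ({1 - new_u / old_u:.0%} less)")
```

Even with each consolidated system drawing more power than a single legacy array, the totals fall sharply – which is the economic half of the E2 argument from the previous section, and it arrives alongside the one-operating-system simplification.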