Saving SecOps teams with an observability pipeline

By Nick Heudecker, Senior Director at Cribl.


Threats will keep coming, and being on the back foot doesn't work. Security is a marathon; teams need to be in it for the long haul, and they require the right tools and training to stay alert and keep moving forward. At some point, though, security teams will crack. Information overload is cited as a key stress factor for IT security teams, with 62% identifying it as a pain point in their role. On top of this, increasingly sophisticated distributed denial of service (DDoS) attacks, hybrid work environments, insider threats and the move to cloud-native applications deployed on containers all add to the complexity SOC teams face. At the same time, these added layers of complexity are beyond the capabilities of traditional monitoring solutions, leaving teams struggling with the wrong tools for the job at hand. It’s a perfect storm for threats to make their way in.

The rise of dynamic observability

There is, however, some hope. During the last couple of years, there has been a shift in approach that looks to solve these issues: the move from static monitoring to dynamic observability. While monitoring focuses on the health of components, observability provides fine-grained visibility into why systems behave the way they do. Observability is the characteristic of software, infrastructure, and systems allowing questions about their behaviour to be asked and answered. It allows you to ask the ‘what ifs’ and learn more about the ‘unknown unknowns.’ Monitoring, on the other hand, forces predefined questions about systems into a set of dashboards that may or may not tell you what's going on in your environment.

Unlike monitoring, though, observability isn't something you can buy off the shelf. No single tool will provide all the benefits of having an observable system. Instead, observable systems need to be built. This starts with embedding instrumentation into applications and infrastructure via events, logs, metrics, and traces. Next, that data is combined with change logs, IT service management data and network traffic to give teams a macro view while also enabling them to drill down into micro details.
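To make that first step concrete, here is a minimal sketch of what embedding instrumentation might look like, using only plain Python and structured JSON events. The service name, field names, and the emit_event helper are illustrative assumptions, not any particular product's API.

import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments-service")

def emit_event(event_type, **fields):
    """Emit one structured observability event (log, metric, or trace span) as JSON."""
    event = {"timestamp": time.time(), "type": event_type, **fields}
    log.info(json.dumps(event))

# A log line, a metric data point, and a trace span for the same request,
# all carrying a shared trace_id so downstream tools can correlate them.
trace_id = uuid.uuid4().hex
emit_event("log", trace_id=trace_id, level="info", message="payment authorised", user="u-123")
emit_event("metric", trace_id=trace_id, name="payment.latency_ms", value=87)
emit_event("trace", trace_id=trace_id, span="authorise_payment", duration_ms=87, status="ok")

Because every event is just structured data with shared identifiers, it can later be correlated with change logs, service management records or network traffic without the emitting application knowing where it will end up.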

The right data to the right platform

Solving the complexity challenge is not the only thing pushing the need for observability. It is also emerging as a valuable tool for security operations teams working cross-functionally. In a modern enterprise, SOC teams do not operate in silos; they interact with infrastructure, operations, and DevOps teams. Each group, though, has its own tooling and analytics platform, which makes it impossible for SOC teams to get a holistic view of the entire IT ecosystem.

Moreover, the interaction between these teams often introduces friction over what various data sets mean or what a correct outcome even looks like. Observability helps solve these issues by delivering the right data to each team's respective platform.

Instrumented systems present another challenge: getting data to the right platforms. It doesn't have to be this way. Using observability pipelines, security teams can decouple sources of data, such as applications and infrastructure, from destinations like log analytics and SIEM platforms. Adding extra monitoring won't solve this problem; organisations are already heavily stocked with monitoring tools, with an average of 29 in place. By abstracting data analysis, and how data is used, from how it is collected, pipelines give teams flexibility in how data is delivered. In addition, observability pipelines enable fine-grained optimisation of data sources through redaction, filtration, and overall reductions in data volumes.
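As an illustration of what such a pipeline stage does, the sketch below filters out low-value events, redacts sensitive fields, and routes what remains to the appropriate destination. The field names, categories, and destination labels are hypothetical, not any vendor's pipeline API.

import re

SENSITIVE_KEYS = {"password", "ssn", "card_number"}    # assumed field names to redact
IP_PATTERN = re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b")  # mask raw IPs in free-text messages

def redact(event):
    """Mask sensitive values so downstream platforms never store them."""
    clean = dict(event)
    for key in SENSITIVE_KEYS & clean.keys():
        clean[key] = "***REDACTED***"
    if "message" in clean:
        clean["message"] = IP_PATTERN.sub("x.x.x.x", clean["message"])
    return clean

def keep(event):
    """Filter: drop noisy debug events to reduce overall data volume."""
    return event.get("level") != "debug"

def route(event):
    """Send security-relevant events to the SIEM, everything else to log analytics."""
    return "siem" if event.get("category") == "security" else "log_analytics"

def process(events):
    """A minimal pipeline: filter, redact, then fan out by destination."""
    routed = {"siem": [], "log_analytics": []}
    for event in events:
        if keep(event):
            routed[route(event)].append(redact(event))
    return routed

events = [
    {"level": "debug", "message": "cache miss"},
    {"level": "warn", "category": "security", "message": "login failed from 10.0.0.5", "password": "hunter2"},
    {"level": "info", "message": "report generated"},
]
print(process(events))

The point of the sketch is the decoupling: sources emit whatever they emit, and the pipeline decides what is kept, what is masked, and which platform receives it.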

The final element in achieving observability is exploring data. Having worked in the data and analytics space, I equate traditional monitoring to data warehousing. In both data warehousing and monitoring, you know what data you're ingesting and the reports or dashboards you're creating. On top of this, you have a collection of known questions over known data. While it is often expensive and inflexible, it's also dependable and well understood.

Observability, on the other hand, is more like a data lake. With a data lake, you don't know what questions you'll ask; you fill the lake with data and organise it to prepare for future questions. If a data warehouse is for known questions over known data, a data lake is for unknown questions over unknown data. It can therefore be helpful to think of a data lake as a question development environment: you create the questions you want to ask at the same time as you explore the data. Unlike a conventional data lake, which supports data scientists and optimises for SQL and Python, an observability data lake optimises for search.
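To illustrate what "optimising for search" means in this context, the snippet below sketches an ad-hoc, free-text query over raw, loosely structured events rather than a predefined report. The sample events and the search helper are purely illustrative assumptions.

def search(events, *terms):
    """Ad-hoc, data-lake-style query: keep any event whose raw text matches every term."""
    terms = [t.lower() for t in terms]
    return [e for e in events if all(t in str(e).lower() for t in terms)]

raw_events = [
    {"host": "web-01", "message": "Failed password for admin from 203.0.113.7"},
    {"host": "web-02", "message": "Accepted publickey for deploy"},
    {"host": "db-01", "message": "failed login attempt for admin"},
]

# A question invented on the spot, not a predefined dashboard query.
print(search(raw_events, "failed", "admin"))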

Use observability to get off the SecOps treadmill and get set for the long haul

With data volumes increasing at ever faster rates, security analysts are stuck on a treadmill that keeps getting faster. They are already burdened with an overload of data to analyse and manage, yet they still lack all the data they need to get visibility into their environments. Monitoring tools may have offered a solution in the past, but they too are now being outpaced by changes in IT ecosystems as businesses go cloud-native or move to container-based infrastructure. Instead, a new approach is needed to tackle the complexity of current IT ecosystems.

Evolving systems to have observability built in enables enterprises to future-proof them as questions arrive and evolve. Security isn’t a game of catch-up; it’s a marathon. With an observability pipeline, businesses can slow the treadmill down and finally capture all the data they need, delivering it cleaned and formatted to the right tools.
