Sysdig launches AI Workload Security

New capability helps companies gain visibility into their AI workloads, identify active risk and suspicious activity in real time, and ensure compliance with emerging AI guidelines.


Sysdig has launched AI Workload Security to identify and manage active risk associated with AI environments. The newest addition to the company’s cloud-native application protection platform (CNAPP) is designed to help security teams see and understand their AI environments, identify suspicious activity on workloads that contain AI packages, and fix issues fast ahead of imminent regulation.

“The addition of AI Workload Security to the Sysdig CNAPP comes in response to widespread demand for a solution that empowers the secure adoption of AI so companies can harness its power and accelerate business. With AI Workload Security, organizations can understand their AI infrastructure and identify active risks, such as workloads containing in-use AI packages that are publicly exposed and have exploitable vulnerabilities. AI workloads are a prime target for bad actors, and AI Workload Security allows defenders to detect suspicious activity within these workloads and address the most imminent threats to their AI models and training data,” said Knox Anderson, SVP of Product Management at Sysdig.

Kubernetes has become the deployment platform of choice for AI. However, securing data and mitigating active risk in containerized workloads are inherently difficult because those workloads are ephemeral. Understanding malicious activity and runtime events that could lead to a breach of sensitive training data requires a real-time solution with runtime visibility. The Sysdig CNAPP is built on open source Falco, the standard for threat detection in the cloud, and is designed for runtime security in cloud-native environments such as Kubernetes clusters, whether those workloads run in the cloud or on premises.
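Falco, mentioned above, works by evaluating YAML-defined rules against system-call events as they happen. The fragment below is a purely illustrative sketch of that rule format, not Sysdig's shipped ruleset: the list of image names and the rule itself are hypothetical, showing how one might flag outbound connections from containers running AI workloads.

```yaml
# Illustrative Falco rule sketch (hypothetical, not Sysdig's actual rules):
# flag outbound network connections from containers whose images are on a
# locally maintained list of AI workload images.
- list: ai_workload_images
  items: [myorg/llm-inference, myorg/model-training]  # hypothetical images

- macro: outbound_conn
  condition: (evt.type = connect and evt.dir = < and fd.type in (ipv4, ipv6))

- rule: Unexpected Outbound Connection from AI Workload
  desc: Detect outbound network connections from AI workload containers.
  condition: >
    outbound_conn and container and
    container.image.repository in (ai_workload_images)
  output: >
    Outbound connection from AI workload
    (command=%proc.cmdline connection=%fd.name container=%container.name)
  priority: NOTICE
  tags: [network, ai]
```

Because rules like this are evaluated at runtime rather than against static images, they can catch suspicious behavior in short-lived containers that a periodic scan would miss.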

With the introduction of real-time AI Workload Security, Sysdig helps companies immediately identify and prioritize workloads in their environment that use leading AI engines and software packages, such as OpenAI, Hugging Face, TensorFlow, and Anthropic. By understanding where AI workloads are running, Sysdig enables organizations to manage and control their AI usage, whether that usage is official or deployed without proper approval. Sysdig also simplifies triage and reduces response times by fully integrating real-time AI Workload Security with the company’s unified risk findings feature. This gives security teams a single view of all correlated risks and events, and a more efficient workflow to prioritize, investigate, and remediate active AI risks.

Widespread AI Adoption Brings Growing Public Exposure

Of all GenAI workloads currently deployed, Sysdig found that 34% are publicly exposed. Public exposure, which refers to a workload’s accessibility from the internet or another untrusted network without appropriate security measures in place, puts the sensitive data leveraged by GenAI models in urgent danger. In addition to increasing the risk of security breaches and data leaks, public exposure also opens the door for regulatory compliance challenges.

Today’s announcement is timely given the increasingly rapid pursuit of AI deployment, as well as growing concern about the security of these models and the data used to train them. A recent Cloud Security Alliance survey found that over half of organizations (55%) are planning to implement GenAI solutions this year. Sysdig also found that, since December, the deployment of OpenAI packages has nearly tripled. Of the GenAI packages currently deployed, OpenAI makes up 28%, followed by Hugging Face’s Transformers at 19%, Natural Language Toolkit (NLTK) at 18%, TensorFlow at 11%, and Anthropic at less than 1%.

The introduction of AI Workload Security also aligns with forthcoming guidelines and increasing pressures to audit and regulate AI, as proposed by the Biden Administration’s October 2023 Executive Order and following recommendations from the National Telecommunications and Information Administration (NTIA) in March 2024. By highlighting public exposure, exploitable vulnerabilities, and runtime events, Sysdig AI Workload Security also helps organizations across industries fix issues fast ahead of this imminent AI legislation.

“Without adequate runtime insights, AI workloads expose organizations to undue risk. Threat actors can exploit vulnerabilities in running packages to access sensitive training data or modify AI requests and responses,” continued Anderson. “Organizations must establish enhanced security controls and runtime detections tailored to these unique challenges, and Sysdig helps customers address these ethical concerns and blind spots so they can reap all the benefits of efficiency and speed that generative AI offers.”
