Cautious approach to AI integration in cybersecurity

The cautious response to AI in cybersecurity highlights privacy and oversight concerns despite potential benefits.

In the realm of cybersecurity, teams are exercising caution when it comes to integrating artificial intelligence (AI), despite considerable industry buzz and the push from business leaders for accelerated adoption. This sentiment emerges from a recent survey conducted by ISC2, which attributes the hesitancy to concerns over privacy and potential unintended risks.

While heralded as a transformative force in security operations, AI tools have been integrated into day-to-day work by only a small proportion of practitioners. Chief Information Security Officers (CISOs) largely harbour reservations, citing privacy, oversight, and the hazards of rapid transition. The survey of more than 1,000 cybersecurity professionals reveals that just 30% of teams actively use AI tools, while 42% are still evaluating their options. Notably, 10% say they have no intention of adopting AI at all.

Adoption of AI is more advanced in industrial sectors (38%), IT services (36%), and professional services (34%). Larger enterprises with more than 10,000 staff show wider adoption, with 37% actively employing AI tools. Conversely, smaller firms exhibit minimal uptake, with just 20% leveraging these technologies. Among firms with fewer than 99 staff, 23% have no plans to evaluate AI security tools.

Andy Ward, SVP International at Absolute Security, commented: “There’s real enthusiasm for the potential of AI in cybersecurity, but also a growing recognition that the risks are escalating just as fast. Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control.”

Despite the apparent caution, AI delivers notable benefits where it is deployed: 70% of users reported improvements in their teams' overall effectiveness. Gains span network monitoring and intrusion detection (60%), endpoint protection and response (56%), vulnerability management (50%), threat modelling (45%), and security testing (43%).

Forecasts regarding AI's impact on hiring are mixed. Over half of cybersecurity professionals anticipate a reduction in entry-level roles as tasks are automated. Yet 31% foresee AI ushering in new opportunities or skill-set demands, potentially offsetting some of the anticipated role reductions. Interestingly, 44% said their hiring strategies remain unaffected but acknowledged they are actively reassessing the roles and skills needed to work with AI technologies effectively.
