F5 has introduced updated threat intelligence resources focused on application and API security. These tools are designed to help enterprise security leaders assess and compare the risk profiles of AI models with greater consistency.
Developed by F5 Labs, the Comprehensive AI Security Index (CASI) and Agentic Resistance Score (ARS) provide metrics for evaluating AI security. The leaderboards are updated monthly, drawing on a vulnerability library that grows by more than 10,000 new attack prompts each month.
Originating from F5’s acquisition of CalypsoAI, the resources are intended to support organisations in assessing and comparing AI models using real-time attack intelligence from F5 Labs.
As AI becomes more widely integrated into business operations, demand for structured validation methods has increased. F5’s leaderboards are designed to help address this by providing metrics such as Risk-to-Performance Ratio and Cost of Security, which reflect trade-offs between performance, cost and security.
CASI measures average model performance under standard conditions, while ARS evaluates resilience against sustained, adaptive attacks, assessing factors including attacker sophistication, defensive endurance, and counter-intelligence signals within AI systems.
F5’s AI Guardrails and AI Red Team tools complement these resources: AI Guardrails controls interactions between AI systems and users, while AI Red Team tests models against simulated multi-step attacks.
The monthly updates give organisations ongoing benchmarking data when evaluating AI models, and the tools also support governance of AI system behaviour and data access.
F5’s broader security research work focuses on identifying and analysing emerging threat methodologies and sharing findings with the wider security community as AI-related cybersecurity challenges evolve.