OWASP’s 2025 update highlights new risks in LLM deployment, focusing on production security and agentic AI.
The 2025 edition of the OWASP Top 10 LLM Risk Categories reflects how generative AI systems are now actually deployed in production. The update emphasizes that prompts should not be treated as secrets protectable through obscurity alone, and it introduces new categories such as LLM08:2025 Vector and Embedding Weaknesses, which addresses risks arising from the rapid adoption of Retrieval-Augmented Generation (RAG) systems. LLM04:2025 Data and Model Poisoning has been expanded to cover threats during pre-training, fine-tuning, and agentic processes, reflecting a broader threat landscape.
The framework now prioritizes the detection of model poisoning in live systems and introduces LLM06:2025 Excessive Agency, reflecting the significant adoption of agentic AI. This category is framed around the risk of granting LLMs too much autonomy without adequate oversight. For AI security teams, these updates mean that securing LLM applications requires an updated approach: defending against semantic attacks that go beyond simple injection, verifying the integrity of embeddings, and detecting behavioral drift in production models. Traditional security testing tools need AI-native extensions for evaluating embedding alignment.
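As a concrete illustration of what embedding integrity verification and drift detection might look like, the sketch below fingerprints stored vectors at index time and compares retrieved vectors against a trusted baseline. This is a minimal, stdlib-only sketch under assumed conventions; the function names (`embedding_fingerprint`, `verify_embedding`) and the drift threshold are hypothetical, not part of the OWASP framework or any specific vector-database API.

```python
import hashlib
import math
import struct


def embedding_fingerprint(vec):
    """Deterministic SHA-256 fingerprint of an embedding vector.

    Computed at index time and stored alongside the vector, so later
    tampering with the stored floats can be detected.
    """
    return hashlib.sha256(struct.pack(f"{len(vec)}d", *vec)).hexdigest()


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def verify_embedding(stored_vec, stored_fp, baseline_vec, drift_threshold=0.85):
    """Check a stored embedding for tampering and for drift.

    Returns (intact, drifted): `intact` is False if the vector no longer
    matches its recorded fingerprint; `drifted` is True if its cosine
    similarity to a trusted baseline falls below the (assumed) threshold.
    """
    intact = embedding_fingerprint(stored_vec) == stored_fp
    drifted = cosine_similarity(stored_vec, baseline_vec) < drift_threshold
    return intact, drifted
```

In practice such checks would run as a background audit over the vector store: an `intact=False` result signals direct tampering with stored vectors, while `drifted=True` can flag re-embedded content whose meaning has shifted, for example after a poisoned document update.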
Source: giskard.ai