OWASP LLM Top 10 (2025 Version)
The definitive list of the ten most critical security risks in large language model applications, updated for 2025 to reflect agentic AI and multi-model deployments.
The OWASP Top 10 for Large Language Model Applications (2025 version, v2.0) catalogues the ten most critical security vulnerabilities in systems built on LLMs. The 2025 list includes: LLM01 Prompt Injection (direct and indirect), LLM02 Sensitive Information Disclosure, LLM03 Supply Chain Vulnerabilities, LLM04 Data and Model Poisoning, LLM05 Improper Output Handling, LLM06 Excessive Agency, LLM07 System Prompt Leakage, LLM08 Vector and Embedding Weaknesses, LLM09 Misinformation, and LLM10 Unbounded Consumption. Notable changes from the 2023 version include the increased prominence of LLM06 Excessive Agency, driven by the rise of agentic architectures, and the replacement of Model Theft with LLM10 Unbounded Consumption, which addresses resource exhaustion in multi-tenant and API-exposed deployments.
LLM01 Prompt Injection is the primary attack vector: adversaries craft inputs that override system prompts, cause the LLM to ignore safety instructions, or manipulate it into executing unintended actions. Direct injection occurs via user input; indirect injection occurs when the LLM processes external content (web pages, documents, tool outputs) containing injected instructions. Engineering mitigations include input validation and sanitization, privilege separation between instruction context and data context, output parsing that does not treat LLM outputs as trusted code, and structured output schemas (JSON mode, constrained decoding) that limit the attack surface. LLM06 Excessive Agency — where LLMs are granted excessive permissions, capabilities, or autonomy — requires implementing least-privilege tool access, human-in-the-loop confirmation for high-impact actions, and rate limiting on agentic tool use.
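The mitigations above can be sketched in a few lines of Python. This is a minimal illustration, not a complete defense: the tool names, the risk tiers in TOOL_POLICY, and the `<untrusted_data>` delimiter convention are all hypothetical choices for the example. It shows privilege separation (untrusted content confined to a data context), structured output handling (the model's reply parsed as JSON and never executed directly), an allowlist implementing least-privilege tool access, and a human-in-the-loop flag for high-impact actions.

```python
import json

# Hypothetical tool permission matrix: each tool is assigned a risk tier.
# "high"-tier actions require explicit human confirmation before execution.
TOOL_POLICY = {
    "search_docs": {"tier": "low"},
    "send_email": {"tier": "high"},
    "delete_record": {"tier": "high"},
}


def build_messages(system_prompt: str, user_input: str, retrieved: str) -> list:
    """Privilege separation: untrusted content (user input, retrieved
    documents) never enters the system role, and external content is
    wrapped in explicit delimiters so it is framed as data, not instructions."""
    return [
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": f"<untrusted_data>\n{retrieved}\n</untrusted_data>\n\n{user_input}",
        },
    ]


def validate_tool_call(raw_output: str) -> dict:
    """Improper-output-handling guard: treat the model's output as untrusted.
    Parse it as structured JSON, check the requested tool against the
    allowlist, and flag high-impact actions for human confirmation."""
    call = json.loads(raw_output)  # anything that is not valid JSON is rejected here
    tool = call.get("tool")
    policy = TOOL_POLICY.get(tool)
    if policy is None:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")
    call["needs_human_approval"] = policy["tier"] == "high"
    return call
```

A gateway built on this pattern would execute low-tier calls directly, queue high-tier calls for operator approval, and log the `PermissionError` path as a potential injection attempt.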
LLM04 Data and Model Poisoning addresses attacks on training data or fine-tuning datasets that cause the model to learn incorrect associations, backdoors, or biased behaviors. Engineering controls include training data provenance tracking, statistical anomaly detection on training sets, and differential privacy techniques for fine-tuning. LLM08 Vector and Embedding Weaknesses specifically targets RAG (Retrieval-Augmented Generation) architectures: adversarial documents can be embedded in knowledge stores to manipulate retrieval and generation. Mitigations require document sanitization before embedding, access controls on retrieval scope, and output monitoring for signs of retrieved-document manipulation. LLM10 Unbounded Consumption requires rate limiting, token budget enforcement, timeout policies, and cost alerting at the inference layer to prevent DoS via prompt inflation.
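The Unbounded Consumption controls described above can be sketched as a small admission check at the inference layer. This is an illustrative, in-memory sketch, assuming a per-tenant daily token budget and a 60-second sliding-window rate limit; the class name, limits, and tenant-keyed dictionaries are invented for the example, and a production system would back this with shared state and add timeouts and cost alerting.

```python
import time


class InferenceBudget:
    """Minimal sketch of inference-layer consumption controls: a per-tenant
    token budget plus a sliding-window request rate limit, checked before a
    request ever reaches the model."""

    def __init__(self, max_tokens_per_day: int, max_requests_per_minute: int):
        self.max_tokens = max_tokens_per_day
        self.max_rpm = max_requests_per_minute
        self.tokens_used = {}    # tenant -> tokens consumed so far today
        self.request_times = {}  # tenant -> timestamps of recent requests

    def admit(self, tenant: str, estimated_tokens: int, now: float = None) -> bool:
        """Return True if the request fits the tenant's limits; reject
        request bursts and inflated prompts before incurring inference cost."""
        now = time.monotonic() if now is None else now
        # Sliding-window rate limit: keep only timestamps from the last 60s.
        recent = [t for t in self.request_times.get(tenant, []) if now - t < 60]
        if len(recent) >= self.max_rpm:
            return False
        # Token budget: account for the estimated cost of this request.
        used = self.tokens_used.get(tenant, 0)
        if used + estimated_tokens > self.max_tokens:
            return False
        recent.append(now)
        self.request_times[tenant] = recent
        self.tokens_used[tenant] = used + estimated_tokens
        return True
```

Rejections from either branch are the natural hook for cost alerting: a tenant repeatedly hitting the token branch may indicate prompt-inflation abuse rather than legitimate load.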
We perform OWASP LLM Top 10 threat modeling for every LLM application engagement, mapping each risk to implemented controls: prompt injection isolation via structured execution contexts, least-privilege tool permission matrices for agentic systems, RAG pipeline document sanitization, and inference-layer rate limiting and cost budget enforcement. Our AI security review process includes adversarial prompt testing against each of the ten categories.
Compliance-Native Architecture Guide
Design principles and a structured checklist for building software that is compliant by default — not compliant by retrofit. Covers data architecture, access controls, audit trails, and vendor due diligence.