OWASP GenAI Security
★ Flagship Project
The OWASP GenAI Security Project is the authoritative source for securing Large Language Models, Generative AI, and Agentic AI applications. It publishes the LLM Top 10, Agentic Top 10, quarterly threat landscapes, governance checklists, red teaming guides, and incident intelligence.
[ visit genai.owasp.org ↗ ]
LLM Top 10
LLM01–LLM10: Prompt Injection, Sensitive Information Disclosure, Supply Chain, Data and Model Poisoning, Improper Output Handling, Excessive Agency, and more.
[ view in top 10 explorer → ]
Agentic Top 10
The first dedicated Top 10 for autonomous AI agents, covering memory poisoning, privilege escalation, cascading failures, and multi-agent trust exploitation.
[ view official list ↗ ]
Resource Library
Securing Agentic Applications Guide 1.0
Comprehensive security guidance for teams designing, building, and deploying autonomous AI agent systems. Covers trust boundaries, tool authorization, and safe orchestration patterns.
A Practical Guide to Securing Agentic Applications
Hands-on implementation guidance for securing agentic AI applications, with concrete patterns for controlling agent autonomy, enforcing least privilege, and preventing prompt injection in multi-agent systems.
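One of the core patterns such guidance describes is least-privilege tool authorization: an agent should only be able to invoke tools it was explicitly granted, with every call audited. A minimal sketch of that idea is below; all names (`ToolGate`, `Tool`, the agent and tool identifiers) are illustrative and not taken from the OWASP guide itself.

```python
# Sketch of allowlist-based, least-privilege tool authorization for an
# agent runtime. Names and structure are hypothetical illustrations of
# the pattern, not an API from any OWASP publication.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    func: Callable[..., object]


class ToolGate:
    """Allow an agent to call only the tools it was granted; log every call."""

    def __init__(self, grants: dict[str, set[str]]):
        self.grants = grants  # agent id -> names of tools it may call
        self.audit_log: list[tuple[str, str]] = []

    def call(self, agent_id: str, tool: Tool, *args, **kwargs):
        allowed = self.grants.get(agent_id, set())
        if tool.name not in allowed:
            raise PermissionError(f"{agent_id} is not authorized to call {tool.name}")
        self.audit_log.append((agent_id, tool.name))
        return tool.func(*args, **kwargs)


# Usage: a research agent may search the web but may not send email.
search = Tool("web_search", lambda query: f"results for {query}")
email = Tool("send_email", lambda to, body: "sent")
gate = ToolGate({"research-agent": {"web_search"}})

print(gate.call("research-agent", search, "agentic AI"))  # allowed
try:
    gate.call("research-agent", email, "a@example.com", "hi")
except PermissionError as exc:
    print(exc)  # denied: send_email was never granted
```

The deny-by-default grant table is the point: any tool the agent was not explicitly given is unreachable, which limits the blast radius of a successful prompt injection.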
OWASP Gen AI Red Teaming Guide
Methodology and techniques for adversarially testing LLM and agentic AI applications. Covers jailbreaking, indirect prompt injection, model extraction, and evaluation framework selection.
Guide for Preparing and Responding to Deepfake Events
Organizational playbook for detecting, responding to, and recovering from deepfake incidents, covering synthetic media used in social engineering, fraud, and disinformation campaigns.
Agentic AI Threats and Mitigations (Taxonomy)
Structured taxonomy of threat categories specific to autonomous AI agents: memory manipulation, tool abuse, privilege escalation, cascading failures, and cross-agent trust exploitation.
Agentic AI Threat Modeling Framework
Structured framework for performing threat modeling on agentic AI systems. Identifies trust zones, attack surfaces unique to agent orchestration, and mitigation controls.
State of Agentic AI Security and Governance 1.0
Industry-wide assessment of the current state of security and governance maturity for agentic AI deployments, including survey data, risk landscape analysis, and recommended governance frameworks.
LLM Cybersecurity and Governance Checklist v1.1
Practical checklist for security and compliance teams evaluating LLM deployments. Covers data governance, model provenance, access controls, monitoring, and regulatory alignment.
Vendor Evaluation Criteria for AI Red Teaming v1.0
Objective criteria for evaluating and comparing AI red teaming providers and tooling. Helps organizations select vendors with rigorous methodology, coverage depth, and transparent reporting.
Threat Defense COMPASS 1.0 + RunBook
Decision framework mapping GenAI threat categories to defense controls. The RunBook provides step-by-step response playbooks for each threat class in LLM and agentic deployments.
AI Security Solutions Landscape Agentic AI (Q2 2026)
Quarterly mapping of the AI security vendor landscape specifically for agentic AI applications. Categorizes tools by agent monitoring, runtime policy enforcement, and multi-agent trust verification.
AI Security Solutions Landscape LLM & Gen AI Apps (Q2 2026)
Latest quarterly report mapping security tooling and vendors for LLM-based applications. Covers DAST for LLMs, prompt firewall solutions, output validation tools, and governance platforms.
AI Security Solutions Landscape Agentic AI (Q3 2025)
Q3 2025 vendor landscape report for agentic AI security tooling. Documents the rapid maturation of agent observability, sandboxing, and authorization control products.
LLM & Generative AI Security Solutions Landscape (Q1 2025)
Early 2025 landscape snapshot of security tools for LLM applications, establishing the baseline taxonomy of solution categories still used in subsequent quarterly reports.
Gen AI Incident & Exploit Round-up Q2 2025
July 2025 compilation of real-world GenAI security incidents, prompt injection exploits, data leakage events, and agentic AI misuse cases observed in production systems.
Gen AI Incident & Exploit Round-up Jan–Feb 2025
Early 2025 incident roundup covering the first wave of production LLM exploits, including indirect prompt injection via RAG documents and model inversion attacks.
LLM & AI Security Glossary
Comprehensive reference of terms and definitions used across LLM security, generative AI, and agentic systems, from prompt injection and jailbreaking to RLHF, RAG, and model extraction.
OWASP LLM Exploit Generation v1.0
Technical reference documenting LLM-specific exploit patterns and generation techniques, intended to help security teams build test cases and red-team LLM deployments.
OWASP AIBOM Generator
Open-source tool for generating an AI Bill of Materials (AIBOM): a comprehensive inventory of an AI system's models, datasets, dependencies, and training provenance for supply chain transparency.
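To make the inventory concrete, the sketch below assembles a minimal AIBOM-style document as CycloneDX-flavored JSON. The component names and versions are hypothetical, and the fields shown are a simplified subset; consult the OWASP AIBOM Generator and the CycloneDX specification for the actual schema.

```python
# Minimal illustration of what an AIBOM inventories: models, datasets,
# and software dependencies. Serialized as CycloneDX-style JSON; field
# names are a simplified, assumed subset of the real schema.
import json

aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "support-chat-llm",      # hypothetical fine-tuned model
            "version": "2.1.0",
        },
        {
            "type": "data",
            "name": "support-tickets-2024",  # hypothetical training dataset
        },
        {
            "type": "library",
            "name": "transformers",          # example software dependency
            "version": "4.44.0",
        },
    ],
}

print(json.dumps(aibom, indent=2))
```

Capturing models, data, and libraries in one machine-readable document is what lets downstream consumers audit an AI system's supply chain the way an SBOM does for conventional software.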
OWASP FinBot Capture The Flag
Reference agentic AI application built for hands-on security training. Used in CTF competitions to practice exploiting prompt injection, excessive agency, and authorization flaws in agentic systems.
Working Groups
5 active research initiatives driving the GenAI security agenda.
Agentic Security Initiative
Focuses on the unique security challenges of autonomous AI agents: multi-agent trust, tool authorization, memory integrity, and real-world exploitation patterns.
[ join initiative ↗ ]
Red Teaming Initiative
Develops standardized AI red teaming guidelines, evaluation criteria for red teaming providers, and adversarial testing playbooks for LLM and agentic systems.
[ join initiative ↗ ]
AI Threat Intelligence Initiative
Tracks and publishes real-world GenAI security incidents, exploit techniques, and emerging attack patterns to provide the community with actionable threat intelligence.
[ join initiative ↗ ]
Secure AI Adoption & Governance
Builds Center of Excellence (CoE) frameworks, governance checklists, and compliance guidance to help organizations adopt LLM and agentic AI securely at enterprise scale.
[ join initiative ↗ ]
AI Security Solution Landscape
Produces quarterly reports mapping the commercial and open-source AI security tooling landscape, helping practitioners evaluate and select solutions for their GenAI stack.
[ join initiative ↗ ]