// owasp_hub / genai_security_project

OWASP GenAI Security

★ Flagship Project

The OWASP GenAI Security Project is the authoritative source for securing Large Language Models, Generative AI, and Agentic AI applications. It publishes the LLM Top 10, Agentic Top 10, quarterly threat landscapes, governance checklists, red teaming guides, and incident intelligence.

[ visit genai.owasp.org ↗ ]
// top_10_frameworks
Top 10 GenAI / LLM Apps2025

LLM01–LLM10: Prompt Injection, Sensitive Data Disclosure, Supply Chain, Data Poisoning, Output Handling, Excessive Agency, and more.

[ view in top 10 explorer → ]
Top 10 for Agentic Applications2026 · NEW

The first dedicated Top 10 for autonomous AI agents covering memory poisoning, privilege escalation, cascading failures, and multi-agent trust exploitation.

[ view official list ↗ ]

Resource Library

// 20 resources · genai.owasp.org
Guides & FrameworksFlagship

Securing Agentic Applications Guide 1.0

Comprehensive security guidance for teams designing, building, and deploying autonomous AI agent systems. Covers trust boundaries, tool authorization, and safe orchestration patterns.

[ open ↗ ]
Guides & Frameworks

A Practical Guide to Securing Agentic Applications

Hands-on implementation guidance for securing agentic AI applications, with concrete patterns for controlling agent autonomy, enforcing least privilege, and preventing prompt injection in multi-agent systems.

[ open ↗ ]
Guides & Frameworks

OWASP Gen AI Red Teaming Guide

Methodology and techniques for adversarially testing LLM and agentic AI applications. Covers jailbreaking, indirect prompt injection, model extraction, and evaluation framework selection.

[ open ↗ ]
Guides & Frameworks

Guide for Preparing and Responding to Deepfake Events

Organizational playbook for detecting, responding to, and recovering from deepfake incidents covering synthetic media used in social engineering, fraud, and disinformation campaigns.

[ open ↗ ]
Guides & FrameworksNEW

Agentic AI Threats and Mitigations (Taxonomy)

Structured taxonomy of threat categories specific to autonomous AI agents: memory manipulation, tool abuse, privilege escalation, cascading failures, and cross-agent trust exploitation.

[ open ↗ ]
Guides & FrameworksNEW

Agentic AI Threat Modeling Framework

Structured framework for performing threat modeling on agentic AI systems. Identifies trust zones, attack surfaces unique to agent orchestration, and mitigation controls.

[ open ↗ ]
Guides & Frameworks2025

State of Agentic AI Security and Governance 1.0

Industry-wide assessment of the current state of security and governance maturity for agentic AI deployments, including survey data, risk landscape analysis, and recommended governance frameworks.

[ open ↗ ]
Checklistsv1.1

LLM Cybersecurity and Governance Checklist v1.1

Practical checklist for security and compliance teams evaluating LLM deployments. Covers data governance, model provenance, access controls, monitoring, and regulatory alignment.

[ open ↗ ]
Checklists

Vendor Evaluation Criteria for AI Red Teaming v1.0

Objective criteria for evaluating and comparing AI red teaming providers and tooling. Helps organizations select vendors with rigorous methodology, coverage depth, and transparent reporting.

[ open ↗ ]
Checklists

Threat Defense COMPASS 1.0 + RunBook

Decision framework mapping GenAI threat categories to defense controls. The RunBook provides step-by-step response playbooks for each threat class in LLM and agentic deployments.

[ open ↗ ]
Quarterly ReportsQ2 2026

AI Security Solutions Landscape Agentic AI (Q2 2026)

Quarterly mapping of the AI security vendor landscape specifically for agentic AI applications. Categorizes tools by agent monitoring, runtime policy enforcement, and multi-agent trust verification.

[ open ↗ ]
Quarterly ReportsQ2 2026

AI Security Solutions Landscape LLM & Gen AI Apps (Q2 2026)

Latest quarterly report mapping security tooling and vendors for LLM-based applications. Covers DAST for LLMs, prompt firewall solutions, output validation tools, and governance platforms.

[ open ↗ ]
Quarterly ReportsQ3 2025

AI Security Solutions Landscape Agentic AI (Q3 2025)

Q3 2025 vendor landscape report for agentic AI security tooling. Documents the rapid maturation of agent observability, sandboxing, and authorization control products.

[ open ↗ ]
Quarterly ReportsQ1 2025

LLM & Generative AI Security Solutions Landscape (Q1 2025)

Early 2025 landscape snapshot of security tools for LLM applications, establishing the baseline taxonomy of solution categories still used in subsequent quarterly reports.

[ open ↗ ]
Threat IntelligenceJul 2025

Gen AI Incident & Exploit Round-up Q2 2025

July 2025 compilation of real-world GenAI security incidents, prompt injection exploits, data leakage events, and agentic AI misuse cases observed in production systems.

[ open ↗ ]
Threat IntelligenceMar 2025

Gen AI Incident & Exploit Round-up Jan–Feb 2025

Early 2025 incident roundup covering the first wave of production LLM exploits, including indirect prompt injection via RAG documents and model inversion attacks.

[ open ↗ ]
Tools & Reference

LLM & AI Security Glossary

Comprehensive reference of terms and definitions used across LLM security, generative AI, and agentic systems from prompt injection and jailbreaking to RLHF, RAG, and model extraction.

[ open ↗ ]
Tools & Reference

OWASP LLM Exploit Generation v1.0

Technical reference documenting LLM-specific exploit patterns and generation techniques, intended to help security teams build test cases and red-team LLM deployments.

[ open ↗ ]
Tools & ReferenceOpen Source

OWASP AIBOM Generator

Open-source tool for generating AI Bill of Materials (AIBOM) a comprehensive inventory of an AI system's models, datasets, dependencies, and training provenance for supply chain transparency.

[ open ↗ ]
Tools & ReferenceCTF

OWASP FinBot Capture The Flag

Reference agentic AI application built for hands-on security training. Used in CTF competitions to practice exploiting prompt injection, excessive agency, and authorization flaws in agentic systems.

[ open ↗ ]