OWASP Top 10 for Large Language Model Applications

The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs). The project provides a list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution, among others. The goal is to raise awareness of these vulnerabilities, suggest remediation strategies, and ultimately improve the security posture of LLM applications. You can read our group charter for more information

Review the official 1.1 release (Full Version or Short Slides) to understand work that has been done to date.

📢 New Document Release: Security & Governance Checklist

We’re excited to announce version 1.0 of our latest document: Security & Governance Checklist. This comprehensive guide is essential for a Chief Information Security Officer (CISO) managing the rollout of Gen AI technology in their organization.

🔗 Download the PDF here - also now available in French and Japanese

📢 New Website Launched: Check us out there as well

We have launched a new website to complement this one.

This initiative is community-driven and encourages participation and contributions from all interested parties.

New to LLM Application security? Check out our resources page to learn more.

Project Sponsorship

Learn how to become an OWASP LLM Project Sponsor/Donor.

We are just launching a new project sponsor program. The OWASP Top 10 for LLMs project is a community-driven effort open to anyone who wants to contribute. The project is a non-profit effort and sponsorship helps to ensure the project’s sucess by providing the resources to maximize the value communnity contributions bring to the overall project by helping to cover operations and outreach/education costs. In exchange, the project offers a number of benefits to recognize the company contributions.

Supporters

Sponsor Logos Comming soon.


OWASP Top 10 for Large Language Model Applications version 1.1

LLM01: Prompt Injection

Manipulating LLMs via crafted inputs can lead to unauthorized access, data breaches, and compromised decision-making.

LLM02: Insecure Output Handling

Neglecting to validate LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data.

LLM03: Training Data Poisoning

Tampered training data can impair LLM models leading to responses that may compromise security, accuracy, or ethical behavior.

LLM04: Model Denial of Service

Overloading LLMs with resource-heavy operations can cause service disruptions and increased costs.

LLM05: Supply Chain Vulnerabilities

Depending upon compromised components, services or datasets undermine system integrity, causing data breaches and system failures.

LLM06: Sensitive Information Disclosure

Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.

LLM07: Insecure Plugin Design

LLM plugins processing untrusted inputs and having insufficient access control risk severe exploits like remote code execution.

LLM08: Excessive Agency

Granting LLMs unchecked autonomy to take action can lead to unintended consequences, jeopardizing reliability, privacy, and trust.

LLM09: Overreliance

Failing to critically assess LLM outputs can lead to compromised decision making, security vulnerabilities, and legal liabilities.

LLM10: Model Theft

Unauthorized access to proprietary large language models risks theft, competitive advantage, and dissemination of sensitive information.