The OWASP Top 10 is a list of the most common application security risks, based on community consensus, contributed data, and commonly observed exploits. Each risk maps to one or more Common Weakness Enumeration (CWE) categories, such as Cross-Site Request Forgery or Improper Input Validation. It has served as a de facto standard for web application security verification since its inception, although the project encourages the adoption of formal standards, such as the OWASP Application Security Verification Standard (ASVS) and the Software Assurance Maturity Model (SAMM).
Since its first release in January 2003, the OWASP Top 10 has influenced cybersecurity best practice. It has raised awareness of weaknesses such as SQL Injection and Cross-Site Scripting, which once plagued most web applications. The most famous early edition was the OWASP Top 10 2004. A new version has been released every three or four years. Since 2007, the OWASP Top 10 has been based on data obtained from multiple sources, with one or two subjective additions that address contemporary application security issues.
Now it’s time for the OWASP Top 10 2024, and you can be a part of it!
Current Status of the 2024 Edition
The OWASP Top 10 relies on data. Its widespread adoption is the result of our rigorous data collection, normalization, and analysis, which emphasizes real-world risks over esoteric, difficult-to-exploit weaknesses.
In 2021, we obtained data from many different sources, including boutique consultancies, large multinational security consultancies, bug bounty programs, and automated tools (both human-assisted and fully automated). This wide variety of data sources and methodologies, covering over 550,000 applications, has made the OWASP Top 10 one of the world's largest application security data sets.
Cross-Site Request Forgery (CSRF) was inserted by the authors in 2007 without a lot of backing data, since it affected nearly all applications. This made its inclusion slightly controversial. In the same vein, since 2017, we’ve run month-long industry surveys on OWASP’s social media channels to identify one or two subjective risks for each new release of the OWASP Top 10. As we are limited to just ten risks, we can’t include every single weakness, so running an industry survey helps build confidence that the injected risks are real and worthy of inclusion. In 2024, we are expecting the technical controls required by the EU Cyber Resilience Act (CRA) to feature heavily in the survey, but we will only know once the survey has been completed.
When will OWASP Top 10 2024 be released?
We plan to release the OWASP Top 10 2024 at OWASP Global AppSec San Francisco in late September 2024, although we may need to push the release back if we have insufficient data. We don't yet have anywhere near the volume of data necessary for the new edition. We have optimized the weighting of the various data methodologies, allowing us to achieve the same level of confidence with less data, but we are still far from comfortable with the current sample size.
There are six phases to the writing and release process we adopted in 2017: data collection; data normalization and analysis; writing; the industry survey; a review process, including a public comment period on early exposure drafts; and finally, translations. The whole process is documented on the OWASP Top 10 website (https://owasptopten.org/).
Currently, we are stuck in data collection. We need to move forward with the other phases; otherwise, the OWASP Top 10 2024 will be very late indeed.
Contributing Data
You can get involved by submitting your data to the OWASP Top 10 project. We accept data from source code reviews, penetration tests, bug bounty programs, triaged tool output, and more. To normalize the contribution format, we provide CSV and JSON templates (https://github.com/OWASP/Top10/tree/master/2024/Data). Your data is anonymized and mixed with other contributions to prevent attribution to any one vendor, consultancy, or security researcher.
There are two types of datasets, which we recommend submitting separately. Tool-assisted Human (TaH) datasets are created by humans performing the testing, using tools to obtain or document the results. Their findings are varied, with a low false positive rate, but potentially esoteric. Human-assisted Tooling (HaT) datasets are output by tools doing the work, triaged by humans. These frequently include the same 15-20 CWEs time and again, but with a higher false positive rate.
Human involvement is important. We generally do not accept untriaged tool output; if you have it, please let us know, and we will see whether it matches our overall combined data sets. If there's insufficient confidence that the output matches our other data, we won't include it.
The following data elements are required or optional for each dataset; a sample submission illustrating them follows the list.
- Contributor name (organization), or anonymous
- Contributor contact email
- Time period (2024, 2023, 2022, 2021)
- Number of applications tested (required)
- CWEs, with the number of applications in which each was found (required)
- Type of testing: TaH, HaT, or untriaged tool output
- Primary programming language
- Geographic region (global, North America, EU, Asia, other)
- Primary industry (multiple, financial, industrial, software)
- Whether or not data contains retests or the same applications multiple times (true/false)
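For illustration, here is a minimal sketch of what a single dataset contribution could look like, built in Python and serialized as JSON. The field names are assumptions chosen for readability, not the official schema; the authoritative CSV and JSON templates are in the GitHub repository linked above.

```python
# A minimal sketch of a single dataset contribution, using illustrative
# (not official) field names -- the authoritative CSV/JSON templates live
# at https://github.com/OWASP/Top10/tree/master/2024/Data
import json

contribution = {
    "contributor": "Example Security Ltd",  # or "anonymous"
    "contact_email": "appsec@example.com",
    "time_period": [2023, 2024],
    "applications_tested": 1250,            # required
    # required: CWE -> number of applications in which it was found
    "cwe_counts": {
        "CWE-79": 312,   # Cross-Site Scripting
        "CWE-89": 104,   # SQL Injection
        "CWE-352": 58,   # Cross-Site Request Forgery
    },
    "type_of_testing": "TaH",               # TaH, HaT, or untriaged tool output
    "primary_language": "Java",
    "region": "EU",
    "primary_industry": "financial",
    "contains_retests": False,
}

# Serialize to a file ready for submission.
with open("contribution.json", "w") as f:
    json.dump(contribution, f, indent=2)
```

Note the type-of-testing field: per the guidance above, TaH and HaT results should go in separate submissions rather than being mixed into one dataset.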
If you can assist with data collection, please head on over to the OWASP Top 10 team website and follow the instructions: https://owasptopten.org/owasp-top-ten-data-collection-is-open
We look forward to seeing your data in the OWASP Top 10. The more data we can get, the more accurate and useful the OWASP Top 10 is going to be!