OWASP OSS Risk 1: Known Vulnerabilities
Known vulnerabilities are a well-understood software risk…but managing and prioritizing them is anything but simple. Learn about key considerations when building a program to detect and remediate CVEs.
This article is part of a 10-part series on the Top 10 OSS Risks:
- OSS-RISK-1 Known Vulnerabilities (this article)
- OSS-RISK-2 Compromise of Legitimate Package (coming soon)
- OSS-RISK-3 Name Confusion Attacks (coming soon)
- OSS-RISK-4 Unmaintained Software (coming soon)
- OSS-RISK-5 Outdated Software (coming soon)
- OSS-RISK-6 Untracked Dependencies (coming soon)
- OSS-RISK-7 License Risk (coming soon)
- OSS-RISK-8 Immature Software (coming soon)
- OSS-RISK-9 Unapproved Change (Mutable) (coming soon)
- OSS-RISK-10 Under/Over-Sized Dependency (coming soon)
The Top 10 Risks for Open Source, developed by the Station 9 research team at Endor Labs, is an OWASP incubator project. The list identifies critical risks to applications that leverage open source software (OSS) dependencies, grouped into two categories:
- Security risks can result in the compromise of system or data confidentiality, integrity, or availability.
- Operational risks can endanger software reliability or increase the efforts and investments required to develop, maintain, or operate a software solution.
In this 10-part series we explore each risk in depth, including the relevance of each risk and how you can factor them into your security programs. We begin with the most well-known OSS risk: Known vulnerabilities.
What is a Known Vulnerability?
Risk Category: Security
Known vulnerabilities are security bugs that were introduced by accident, ended up in releases distributed to downstream users, and whose presence was publicly disclosed – hopefully in a responsible way. This can happen to OSS packages as well as private packages.
The most common way to publicly disclose vulnerabilities is via the Common Vulnerabilities and Exposures (CVE) program. When a vulnerability is discovered, a CVE Numbering Authority (CNA) assigns it a CVE identifier once it meets the program's criteria for a distinct, confirmed vulnerability. After assignment, the organization responsible for the CVE (typically the CNA or the reporting organization) decides when to publish it, making it publicly accessible in the CVE List or another relevant database, following responsible disclosure principles. At that time, a patch (upgrade) that fixes the vulnerability may or may not be available. Likewise, an exploit (example code showing how to take advantage of the vulnerability) may or may not already exist.
Research shows that, on average, an exploit is published 37 days after the patch is released. However, 14% of exploits are published before the patch is released and 80% of public exploits are published before the CVEs are published. On average, an exploit is published 23 days before the CVE is published.
The moral of the story is that you are behind from the start and you should always patch your dependencies as soon as possible once a fix is released.
What Happens if I Don’t Upgrade?
Let’s take a look at a real-world example: In early March 2017, security researchers discovered a flaw in the Jakarta Multipart parser of the open source web application framework Apache Struts that could allow remote code execution via crafted Content-Type headers. Doing what security researchers do, they created a proof-of-concept exploit demonstrating how the vulnerability could be abused and notified the maintainers on March 6, 2017.
A patch was quickly released the next day, on March 7, and the corresponding CVE-2017-5638 was published 4 days later on March 11. The CVE was given the highest risk score possible (10/10) and all users of the framework were urged to update immediately.
Spoiler alert: One of the users who did not update all deployments of Apache Struts was the American credit bureau Equifax.
Over two months later, on May 13, 2017, Equifax had still not taken sufficient action and hackers were able to use the above-mentioned exploit to gain access to the internal servers on the Equifax corporate network. Private records of 145.5 million Americans along with 15.2 million British citizens and about 19,000 Canadian citizens were compromised in the breach, making it one of the largest cyber crimes related to identity theft. The activities went on for 76 days until July 29, 2017 when Equifax finally discovered the breach.
The security risk of using a package with a known vulnerability is pretty clear: It may be exploitable in the context of the downstream software, which could compromise your business or your customers’ business through a breach - and cost you over half a billion dollars.
Managing Known Vulnerabilities
On paper, discovering known vulnerabilities is pretty straightforward. We start with a list of vulnerabilities that contains affected versions and, if applicable, fixed versions. Then we compare the list to the package names and versions in our own software to see if we’re exposed to any of them. To an extent it is as simple as it sounds, but there are a few details to consider:
- Vulnerability Data: Where does the list of vulnerabilities come from? How confident are we in the data? Does it cover all ecosystems relevant for me, e.g. Python, Java and C#? How often is it updated?
- Our Data: How do we get the list of package names and versions in our own software (a.k.a. our “Software Bill of Materials” or SBOM)? How do we know if a given package is actually used? Is the version in our list up-to-date? Are any packages missing from the list? How often is the inventory updated?
- Potential for Exposure: Even if we trust the vulnerability data and that a given package is actively used as part of our software, how do we know if the code in question is reachable from our code?
The answers to these questions become paramount once you're looking at millions of vulnerabilities and need to figure out where to start and how to define a process to stay on top of remediation.
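At its core, this matching step is a join between your software inventory and the vulnerability data. The sketch below illustrates the idea in Python; the Advisory shape and exact-version matching are our simplifications (real advisories express affected versions as ranges with ecosystem-specific semantics), so treat it as an illustration rather than a working scanner.

```python
# Minimal sketch of matching an SBOM against a vulnerability feed.
# The Advisory shape and exact-version matching are illustrative; real
# advisories use version ranges and ecosystem-specific version ordering.
from dataclasses import dataclass

@dataclass
class Advisory:
    id: str               # e.g. "CVE-2017-5638"
    package: str          # ecosystem package name
    affected: set[str]    # affected versions (simplified from ranges)
    fixed: str | None     # first fixed version, if any

# Hypothetical inventory extracted from manifests/lockfiles: name -> version
sbom = {"struts2-core": "2.3.31", "requests": "2.31.0"}

def find_known_vulns(sbom: dict[str, str],
                     feed: list[Advisory]) -> list[Advisory]:
    """Return advisories whose package/version pair appears in the SBOM."""
    return [adv for adv in feed
            if sbom.get(adv.package) in adv.affected]
```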
Reliability of Vulnerability Data: NVD vs GHSA vs OSV
There are several options for known vulnerability data sources. When selecting a Software Composition Analysis (SCA) tool to scan for known vulnerabilities and other risks, it is important to know where that tool gets its information since, as we know, garbage in means garbage out. We’ll look at three of the most prominent sources: the NVD, the GitHub Advisory Database, and OSV.
The National Vulnerability Database (NVD) is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). However, the different naming schemes (CPE vs. ecosystem-specific schemes) and dictionaries make mapping hard. Additionally, there have been some recent developments with the NVD that make it inadequate as a standalone database. The TL;DR is that the NVD has suspended adding CVSS scores and CPE matches to many CVEs, meaning those CVE entries contain neither severity information nor metadata about which software is actually affected.
The GitHub Advisory Database (GHSA) is a centralized repository of security advisories and vulnerability information for open source projects hosted on GitHub. It also accepts advisories that have no CVE, which matters because some projects choose not to use the CVE process. GHSA serves as a comprehensive source of information about security vulnerabilities affecting software dependencies used in GitHub repositories. One of the main advantages over the NVD is that GHSAs identify open source packages using the same notation and dictionaries as the respective package managers (while CPEs differ significantly and cannot be mapped as easily).
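To make the naming problem concrete, here is the same Apache Struts release identified both ways. The formats are real (CPE 2.3 and package URL), though the pairing shown here is our own illustration:

```python
# One Apache Struts release, two identifier schemes.
# NVD matches vulnerabilities against CPEs; GHSA/OSV and package managers
# use ecosystem coordinates (shown here as a package URL, or "purl").
cpe = "cpe:2.3:a:apache:struts:2.3.31:*:*:*:*:*:*:*"
purl = "pkg:maven/org.apache.struts/struts2-core@2.3.31"
# There is no general algorithm mapping one to the other: the CPE
# vendor/product pair must be curated against Maven coordinates by hand.
```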
Open Source Vulnerabilities (OSV) is an open source aggregation database created in 2021 by Google and maintained in collaboration with the Open Source Security Foundation (OpenSSF). It pulls vulnerability data from more than 13 sources covering more than 20 ecosystems, including NVD, GHSA, and OSS-Fuzz, as well as ecosystem-specific advisory databases such as the Go Vulnerability Database, the PyPI Advisory Database, the Rust Advisory Database, and many more.
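OSV also exposes a free query API at api.osv.dev that returns advisories for a specific package version, which makes it straightforward to fold OSV data into your own tooling. A minimal sketch (the package and version are arbitrary examples):

```python
# Minimal sketch: query the public OSV API (https://api.osv.dev) for
# advisories affecting one package version.
import json
import urllib.request

def osv_query(name: str, ecosystem: str, version: str) -> list[dict]:
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for vuln in osv_query("jinja2", "PyPI", "2.4.1"):
    print(vuln["id"], vuln.get("summary", ""))
```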
Endor Labs Vulnerability Database
The Endor Labs Vulnerability Database is based on NVD, GHSA, and OSV data, along with a manually-annotated, function-level database for vulnerabilities going back to 2014 that can be used to determine whether the vulnerable function (as opposed to the vulnerable dependency) is reachable (more on that below). This database is updated every 12 hours, and analytics are re-run automatically for every project at least once every 24 hours (or sooner, based on project configuration), which means that new vulnerabilities are automatically reported without customers needing to rescan their projects.
Confidence in Your Own Software Bill of Materials
Traditional SCA tools, such as OWASP Dependency-Check, look at manifest files to find out what dependencies a project has. Think of a manifest file like a shopping list that tells the tool what external pieces of code the project uses. The tool then checks this list against a database of known vulnerabilities to see if there are any matches. However, sometimes it says there's a problem when there isn't really one, because it may flag a dependency as vulnerable even if the project never actually uses the risky part of its code. This approach can also be problematic if manifest files are incomplete, a phenomenon we call “phantom dependencies”.
There's another way to find vulnerabilities that can work better: code-centric approaches. Instead of just looking at a list, these methods dive into the actual code to see if the parts with known vulnerabilities are really used in ways that could cause problems. To do this, they use two main program analysis techniques: static and dynamic analysis. Each has its own way of exploring the code to see if the dangerous parts can actually cause harm in the specific way the application uses them.
Dynamic analysis watches the application in action. It's like following someone around to see if they'll walk into a dangerous area. This method runs the application and observes how it behaves, looking for paths that could trigger the vulnerable parts of the code. If the application never goes near those dangerous paths during the test, dynamic analysis will note that those vulnerabilities aren't a real concern in the way the application is used. The biggest drawback of dynamic analysis is that it is difficult to exercise an application so that all paths are explored.
Static analysis, on the other hand, is like having a superpower that lets you see through the code without running it. It checks the entire application's code to find paths that could lead to the vulnerable parts. This method doesn't need the application to be running to work. It's like reading a map and marking the roads that lead to a place you want to avoid, making sure you don't accidentally go there.
The use of static analysis to find paths to vulnerable code has a track record in academic research (e.g. Detection, assessment and mitigation of vulnerabilities in open source dependencies and Präzi: From Package-based to Call-based Dependency Networks). Those ideas have been productized by Endor Labs, scaling the approach to thousands of libraries developed in many different programming languages.
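To make the idea concrete, here is a toy sketch of the reachability question as a graph search. A real static analysis must first construct the call graph from source or bytecode across all dependencies, which is the hard part this sketch skips entirely; the function names and graph below are made up.

```python
# Toy illustration of the idea behind static reachability: build a call
# graph and ask whether a known-vulnerable function is transitively
# callable from an entry point.
from collections import deque

# Hypothetical call graph: caller -> callees
call_graph = {
    "app.main": {"app.upload", "app.render"},
    "app.upload": {"struts.MultipartParser.parse"},  # vulnerable function
    "app.render": {"jinja2.Template.render"},
}

def is_reachable(graph: dict[str, set[str]], entry: str, target: str) -> bool:
    """Breadth-first search from the entry point to the vulnerable function."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

print(is_reachable(call_graph, "app.main", "struts.MultipartParser.parse"))  # True
```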
Potential for Exposure: CVSS, EPSS, KEV, SSVC, Reachability
When it comes to potential for exposure, there’s no one way to assess whether a known vulnerability will put your organization at risk. Rather, here are five methods that you can use together to make informed decisions: CVSS, EPSS, KEV, SSVC, and Reachability Analysis.
Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. When a CVE is added to the NVD, it is assigned a CVSS score that is meant to reflect its severity. Critics of CVSS point out that there’s incentive for the reporting party to exaggerate or downplay the true criticality, so using CVSS alone could lead you to prioritize fixing the wrong risks.
Exploit Prediction Scoring System (EPSS) is a data-driven method to estimate the likelihood (probability) that a software vulnerability will be exploited in the wild. Many organizations use a combination of CVSS and EPSS to prioritize remediation. For example, a critical vulnerability with a low EPSS score might not be worth fixing, whereas a medium-severity vulnerability with a high EPSS score is of higher priority.
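EPSS scores are published by FIRST through a free public API, so they are easy to pull into your own prioritization tooling. A minimal sketch:

```python
# Minimal sketch: fetch an EPSS score from the public FIRST API
# (https://api.first.org/data/v1/epss).
import json
import urllib.request

def epss_score(cve_id: str) -> float | None:
    url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp).get("data", [])
    return float(data[0]["epss"]) if data else None

# Probability that the Struts vulnerability from above is exploited
# in the next 30 days:
print(epss_score("CVE-2017-5638"))
```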
Known Exploited Vulnerabilities (KEV) Catalog, maintained by CISA, is a database of vulnerabilities that have actually been exploited in the wild. The focus on real-world exploitation data makes the KEV an additional useful source of data on top of CVSS. But this approach also has problems. The KEV is basically just a list of CVEs meeting its criteria and provides little additional information. Nevertheless, it is valuable as an additional layer of intelligence to drive vulnerability remediation.
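The KEV catalog is distributed as a JSON feed, so checking whether a CVE appears in it takes only a few lines. A minimal sketch (the feed URL is the one CISA publishes; verify it against cisa.gov before relying on it):

```python
# Minimal sketch: check a CVE against the CISA KEV catalog JSON feed.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def in_kev(cve_id: str) -> bool:
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return any(v["cveID"] == cve_id for v in catalog["vulnerabilities"])

print(in_kev("CVE-2017-5638"))  # True: the Struts/Equifax CVE is listed
```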
Stakeholder-Specific Vulnerability Categorization (SSVC) was created by Carnegie Mellon University's Software Engineering Institute (SEI) and CISA. It is a vulnerability analysis methodology that accounts for a vulnerability's exploitation status, impacts to safety, and prevalence of the affected product in a singular system. Put simply, SSVC is a structured decision tree tool for determining how to evaluate and respond to vulnerabilities. It can be used to codify the level of risk your organization is comfortable with, and which data sources you’ll use to prioritize risks.
Reachability Analysis is the ability for an SCA tool to determine whether the vulnerable code can be executed in the context of the dependent software. Endor Labs uses call graph analysis to determine function-level reachability, i.e. if the vulnerable function can be executed in the context of the dependent project. While no SCA tool can completely eliminate false positives, function-level reachability is the closest we can get to certainty about whether a finding is a false positive, and therefore provides tremendous value. When combined with CVSS, EPSS, and various data sources, reachability reduces the need for manual research and eliminates most SCA tool noise.
Read more about each of these techniques in How Should I Prioritize Software Vulnerabilities?
Prioritizing Known Vulnerability Remediation
You’ve addressed the issues of reliability, relevance, and exposure. Excellent, now what?
While the number of dependencies varies (based on programming language, complexity of the software/application, etc.), the average application has several hundred dependencies. And unless you founded your company last week, you probably have quite a bit of security debt. So you need to find a way to prioritize your team’s time around what matters most.
After identifying all potential vulnerabilities using program analysis (ensuring no blind spots), our customers see substantial noise reduction when prioritizing initial remediation using five filters. In practice they’re applied all at once, but we’ll present them in a logical order for the sake of this blog (a sketch that combines them follows the filter descriptions):
- Affected function is reachable
- Fix available
- In production code (not test code)
- Probability of exploit (high EPSS)
- High and critical CVSS
Filter 1: Affected Function is Reachable
When an OSS package is used in your application code, you are typically just using a subset of the package rather than the entire package. When the vulnerable code isn’t actually in use, there’s little point in waking people up to chase it down. With this filter you remove vulnerabilities that aren’t reachable at the function level.
Filter 2: Fix Available
Next, you can prioritize based on whether there’s a fix (patch) available. That is not to say that you shouldn’t try to fix unpatched vulnerabilities, but that effort involves engineering resources or partnering with OSS maintainers. Vulnerabilities that have patches are easier to remediate, reducing your overall risk faster.
Filter 3: In Production Code
If the vulnerability is not in production code (which we define as everything that runs to support business processes, internet-facing or not), then the probability that it is actually exploitable is somewhere around zero. Ideally you might want to remove the vulnerability from development and test dependencies, but it is not a high security risk today.
Filter 4: Probability of Exploit
Using EPSS, we usually recommend a threshold of 3-5% probability of exploitation, but this will vary depending on your organization’s risk tolerance.
Filter 5: High and Critical Severity
While it is best practice to remediate all vulnerabilities regardless of severity, the CVSS score constitutes a natural factor for prioritization, and most teams choose to start with the Criticals (9.0-10.0) and Highs (7.0-8.9).
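As noted above, here is a sketch that combines the five filters into a single funnel. The Finding fields and default thresholds are illustrative, not a real Endor Labs schema; in practice these filters are applied through policies rather than ad-hoc code.

```python
# Hedged sketch of the five-filter prioritization funnel.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    reachable: bool       # Filter 1: affected function reachable (call graph)
    fix_available: bool   # Filter 2: a patched version exists
    in_production: bool   # Filter 3: production code, not test/dev scope
    epss: float           # Filter 4: probability of exploitation (0.0-1.0)
    cvss: float           # Filter 5: severity score (0.0-10.0)

def prioritize(findings: list[Finding],
               epss_threshold: float = 0.03,   # the 3-5% band suggested above
               cvss_threshold: float = 7.0) -> list[Finding]:
    """Keep only findings that pass all five filters."""
    return [
        f for f in findings
        if f.reachable
        and f.fix_available
        and f.in_production
        and f.epss >= epss_threshold
        and f.cvss >= cvss_threshold
    ]
```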
Case Study with Endor Labs
We enable every customer to prioritize SCA findings using the factors listed above. To illustrate a real-world outcome of this process, we’ll look at the Endor Labs Dashboard for a large tech customer that uses Python. Using program analysis, they discovered that they had nearly 16K dependencies across 15 projects. This company, which is subject to regulations including PCI DSS and the FDA’s new medical devices regulations, would certainly struggle to manage that amount of security debt while staying compliant. But after applying customizable prioritization filters, they were able to reduce noise by 99.97%!
- Total open vulnerabilities: 134K
- In production: 131K
- Fix available: 128K
- Reachable: 825
- EPSS > 8%: 42
Using our Total Economic Impact calculator, which assumes an average of 8 hours to investigate each vulnerability, this process is saving the company more than 1 million developer hours for a cost savings of $74.9M.
Now let’s look deeper at their findings. As we can see in the dashboard, of their 16K dependencies, 81% are transitive dependencies (meaning they are dependencies of dependencies). This is significant because it relates back to the completeness of this company’s data. If they were using an SCA tool that cannot find transitive dependencies, the AppSec team would be starting off with an incomplete picture of their software inventory. Further, research indicates that 95% of vulnerable dependencies are transitive, so the consequences of failing to discover transitive dependencies are clear: Known vulnerabilities could remain undiscovered by the AppSec team, giving bad actors an opportunity to exploit them. In the case of this customer, all 42 of their prioritized findings are in transitive dependencies. Six of those findings have critical severity, including multiple CVEs leading to remote code execution.
Using Endor Labs to Automate Identification and Prioritization of Known Vulnerabilities
Endor Labs allows you to define Rego-based policies that give you fine-grained control over what gets surfaced. For known vulnerabilities, you can create one or more action policies that, given a custom set of filtering criteria, warn developers, block commits, or generate Jira tickets, emails, Slack notifications, or other custom ticketing and messaging workflows when a new vulnerability meeting your criteria is detected.
Get Started with Endor Labs
Endor Labs Open Source includes reachability-based SCA that helps you find known OSS vulnerabilities, prioritize remediation across multiple filters, and set policies to detect and route new findings.
To compare results against your current tool, Endor Labs offers a free, full-featured 30-day trial that includes test projects and the ability to scan your own projects.