How CycloneDX VEX Makes Your SBOM Useful
Explore the challenges of modern vulnerability management and the efficiency of the Vulnerability Exploitability eXchange (VEX) in our latest blog post. Learn how VEX helps identify and communicate the true exploitability of vulnerabilities, streamlining cybersecurity efforts in the face of overwhelming scanner findings.
As anyone in the information security world knows, if you run a vulnerability scanner against any modern network or product, you’ll get a lot of CVE and other vulnerability findings: 40,000, 130,000, or potentially even more!
The thing is, though, only a small minority of these findings are actually exploitable. Estimates vary, but even a conservative one puts the figure at just 10-20%.
So, finding out which ones an attacker can actually use is a critical cybersecurity task. And because research has shown that organizations can, on average, fix only 5-20% of the vulnerabilities in their networks per month, you’ll need to be very selective about which issues you target for remediation.
Tools like the Exploit Prediction Scoring System (EPSS) can help you cut through the noise substantially. And when you add reachability analysis on top, you can determine exactly which issues represent risks and which do not.
Once you have applied these approaches, an important next step is communicating the results of this type of analysis externally.
The state of the art: custom pages, portals, and spreadsheets
Assuming your organization developed a piece of software and did either manual or automated investigation of a given vulnerability in it, you’ll want to make sure the results of the investigation are available to everyone using your products. Conversely, if you are a software customer, you’ll want the publisher (whether a commercial vendor or open source organization) to respond as quickly and accurately as possible to inbound inquiries.
Unfortunately, the state of the art for this process right now relies on using custom-built web pages or alerts on security status portals. Many organizations are even less sophisticated than this and use emails and spreadsheets to communicate the exploitability (or lack thereof) of vulnerabilities in their stack.
Especially when under time pressure, as is often the case during security crises like the log4shell (CVE-2021-44228) saga, these tools and techniques are simply not good enough. And when you scale the problem by the thousands of findings that are present in pretty much every network, it gets really bad.
To top it all off, the increased use of software bills of materials (SBOMs) is making this problem even worse. Although increasing the transparency of organizational software supply chains is generally a good thing, a quick analysis of the components in an SBOM will likely reveal a lot of apparently (but not actually) vulnerable components.
Without an efficient way to communicate about the relative risk posed, organizations are going to waste incredible amounts of time merely corresponding about vulnerabilities in their code and networks.
Enter the VEX
Thankfully, there is a solution to this problem: the Vulnerability Exploitability eXchange (VEX).
By using a machine-readable format to detail the true exploitability of a vulnerability in the wild, software publishers and customers using the VEX can communicate far more concisely and rapidly. Furthermore, the relevant fields and their values provide a mutually exclusive and collectively exhaustive way of describing vulnerabilities, reducing uncertainty during emergencies.
There are several VEX formats, but we’ll focus on the one offered by the CycloneDX SBOM standard in this post due to its flexible functionality and multiple use cases. Although it is well-documented (check out the vulnerabilities section of the CycloneDX reference), we’ll take a closer look at the meaning of some of the various fields in this post.
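To make this concrete, here is a minimal sketch of what a standalone CycloneDX VEX document can look like in JSON. The field names come from the CycloneDX vulnerabilities schema, but the CVE, analysis values, SBOM serial number, and component reference are all illustrative assumptions rather than a real publisher’s statement:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "vulnerabilities": [
    {
      "id": "CVE-2021-44228",
      "source": {
        "name": "NVD",
        "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228"
      },
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable JNDI lookup is never invoked by our application code."
      },
      "affects": [
        {
          "ref": "urn:cdx:f08a6ccd-4dce-4759-bd84-c626675d60a7/1#pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
        }
      ]
    }
  ]
}
```

The affects entry uses a BOM-Link to point back at the log4j-core component in a previously published SBOM, which is what ties the exploitability statement to a concrete piece of your inventory.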
VEX fields
Of most interest to security professionals is the analysis sub-field of the vulnerabilities section. This is where the key information in a VEX statement lives: the risk posed by a vulnerability and the publisher’s (planned) response to it. The analysis sub-field has four components, most of which have a fixed set of allowed values:
state
This is probably the most important thing to look at when reviewing a VEX report, especially if you are in a hurry. The state describes the publisher’s opinion regarding the vulnerability in question, most notably whether or not it assesses it to be exploitable. The potential values for this field are:
resolved
Pretty straightforward here.
This just means the publisher has introduced some kind of code or configuration change that eliminates the issue in question.
A crucial consideration, however, is whether the fix is complete. During the rush to ship a patch during crisis situations, developers will sometimes fail to fully resolve the flaw (or even introduce new ones). So if a publisher reports an issue as resolved, it’s important to conduct your own diligence on the fix and potentially even assume some (low) level of exploitability.
resolved_with_pedigree
Similar to the above, but here the fix is documented in the affected component’s pedigree elsewhere in the SBOM, for example through verifiable commit history or diffs. This is especially useful if a development team needs to “fork” an open source dependency and maintain its own version of it.
exploitable
Probably not the result you want to get when querying the state field, but it’s better to know than to be in the dark.
If this is the case, the publisher assesses there is some chance a malicious actor could take advantage of the vulnerability. For such issues, you should focus on determining the real-world likelihood of that happening and on rapidly applying the appropriate controls (which the publisher should include in the response field).
in_triage
Also self-explanatory. This means the publisher is still looking at the issue. This should generally be the default state for any newly-discovered vulnerability. And if a publisher is making its VEX statements available to the general public without authentication, it should probably mark exploitable vulnerabilities with this value until a patch or other fix is available.
false_positive
This means the supposed vulnerability does not in fact exist in the SBOM in question. Scanning tools can be fickle and sometimes will report inaccurate results. If the publisher identifies such a situation, it makes sense to report the results proactively using the VEX format.
not_affected
Security teams will be crossing their fingers that this is the vulnerability’s state, as it means the component or service cannot be impacted by the given issue.
If a publisher reports a product as being not_affected or that the vulnerability is a false_positive, it should provide an entry in the justification field. This isn’t necessarily the end of the story, though. People make mistakes, and it’s always conceivable that a clever attacker finds an attack vector that developers haven’t. That’s why the next section also warrants review.
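To sketch how the state evolves in practice, a publisher might ship a statement like the one below as soon as a report comes in, and then re-issue it as exploitable, not_affected, or resolved once the investigation concludes. The snippet is trimmed to the vulnerabilities entry, and the CVE identifier is just a placeholder:

```json
{
  "vulnerabilities": [
    {
      "id": "CVE-2024-0000",
      "analysis": {
        "state": "in_triage",
        "detail": "Report received; impact on shipped releases is still being investigated."
      }
    }
  ]
}
```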
justification
It probably wouldn’t be a great idea to take not_affected at face value without any further information. That is why the justification field is incredibly important. A close second in importance to knowing that a vulnerability is not exploitable is knowing why it isn’t. The justification field tells you exactly that.
With that said, not all justifications are created equal. If the impacted code is indeed not present, there is no way it can be exploited, so it’s reasonable to treat the risk as trivial.
A compensating control such as an antivirus or endpoint protection system, however, even if well-tuned, cannot guarantee no malicious actor will ever access a given vulnerability. Thus, it may make sense for organizations to categorize risks differently based on different justification statements.
For example, if the publisher states code_not_present, it may be fair to assign this a trivial or zero chance of exploitation. If, on the other hand, the flaw is protected_by_mitigating_control, then you may want to assign a non-zero (albeit small) likelihood of this issue being used in an attack.
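One way to put this into practice is an internal triage policy that maps each justification value to an assumed likelihood of exploitation, which downstream tooling can then apply automatically. The mapping below is a purely hypothetical sketch, not part of the CycloneDX specification, and the right buckets will depend on your own risk tolerance:

```json
{
  "justificationRiskPolicy": {
    "code_not_present": "negligible",
    "code_not_reachable": "negligible",
    "requires_configuration": "low",
    "requires_dependency": "low",
    "requires_environment": "low",
    "protected_by_compiler": "low",
    "protected_at_runtime": "low",
    "protected_at_perimeter": "medium",
    "protected_by_mitigating_control": "medium"
  }
}
```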
code_not_present
This option indicates that the vulnerable component is present, but the specific code containing the flaw is not. This could be the result of continuous integration (CI) steps or similar processes that remove anything that could be exploited.
code_not_reachable
If you know anything about Endor Labs, you’ll know that this is the case for the vast majority of CVEs. Check out this deep dive on the topic for details on why reachability analysis is so important. Whether you use our product or manual analysis to reach this conclusion, this will probably be one of the most commonly used justifications.
requires_configuration
Oftentimes, exploiting a given vulnerability requires a certain configuration to be in place. For example, CVE-2017-8283 is only an issue on operating systems that don’t use a certain setting by default, which isn’t the case in the Ubuntu distributions where this issue is found. Thus, taking advantage of this specific flaw requires_configuration that is not present.
requires_dependency
Some vulnerabilities require that certain other pieces of software - such as open source libraries - be present in order to be useful to an attacker. If such a dependency is not in place, and there are no other ways to make use of the flaw, then it is safe to categorize the issue in question as requires_dependency. This is essentially the reverse of code_not_reachable.
requires_environment
Especially in the cloud, understanding what sort of environment a piece of software is operating in is critical to evaluating its security. Sometimes it can be the case that one type of hosting environment will expose an issue to an attacker while a different one will not. Software publishers can make this clear by using this entry in the justification field.
protected_by_compiler
In certain cases, a compiler setting can render a vulnerability non-exploitable by enabling specific security features or optimizations that alter the behavior of the compiled code. For instance, setting a flag that enables stack canaries would make buffer overflow attacks more difficult to execute successfully.
protected_at_runtime
Even if the code is otherwise vulnerable to an attacker, it is possible the application in question prevents an attack through its design. For example, if exploiting a vulnerability requires the attacker to pass a specially malformed JSON file and the application validates all inputs to block the passage of such a file, there is minimal chance an attacker can access it.
protected_at_perimeter
Similar to the above, it’s possible that an organization can prevent exploitation of a known vulnerability in an application by blocking the ingress of certain types of traffic into its network. If attacking a certain flaw requires access to a certain port that is completely blocked by an organizational firewall, it’s fair to say that the issue is not exploitable in the given context.
With that said, this approach isn’t necessarily foolproof, and the same caveat applies to the final option:
protected_by_mitigating_control
It’s conceivable that some sort of compensating mechanism can ensure no hacker will be able to use a certain software flaw to conduct an attack. An example of this might be a well-tuned intrusion prevention system that automatically blocks malware attempting to exploit a given vulnerability.
Again, this isn’t a silver bullet, so don’t ignore a VEX report that claims a vulnerability is not_affected because of this type of justification.
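As an illustration, a statement leaning on this justification might look like the sketch below (the specifics are invented); the detail field is where you should expect the publisher to describe the control well enough for you to judge it yourself:

```json
{
  "vulnerabilities": [
    {
      "id": "CVE-2024-0000",
      "analysis": {
        "state": "not_affected",
        "justification": "protected_by_mitigating_control",
        "detail": "An inline intrusion prevention system blocks the malformed requests required to trigger this flaw."
      }
    }
  ]
}
```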
response
If a software publisher identifies an issue as exploitable, you are rightly going to want to know how that organization plans to address the problem. The good news is that the response field tells you just this. And understanding what a given publisher or other software maker plans to do (and what they recommend in the meantime) can be a huge help if you are struggling with an array of known vulnerabilities or find yourself in a crisis situation.
can_not_fix
Unfortunately, in some cases it is functionally impossible to resolve a given software issue without causing more severe damage through breaking changes or other consequences. In this case, a publisher might “throw in the towel” and state that it cannot fix the given issue. While few things are impossible, this status implies that no reasonable effort would ever have a chance of resolving the issue in question.
will_not_fix
Although this response is simple and similar to the above, many customers will likely be unsatisfied with it. If a software publisher says it won’t resolve something, this is likely to cause some consternation among its customer or user base.
With that said, if pursuing a truly risk-based approach, it might make sense to do just this in some circumstances. If the costs of the remediation effort would far outstrip the potential security gains and there are good compensating controls in place, it might just make sense to leave certain issues alone.
update
Probably the “cleanest” response to a vulnerability is to apply a code fix. Whether the publisher updates its production deployments in the case of a Software-as-a-Service (SaaS) product or makes a patch available to customers running it in their own environments, an update can potentially resolve the issue completely.
With that said, remember that not every patch is perfect. Even if a publisher believes it has fixed the issue, resourceful attackers might prove otherwise.
rollback
In the case of a security regression - whereby a publisher accidentally introduces a new flaw that was not present beforehand - rolling back to an earlier version might be appropriate, depending on the risk posed by each course of action. While this might be easy to do if the update was recent, if your organization has been using the newer version for a long time, key business processes might be disrupted in the case of a rollback. As with every security question, you’ll need to weigh the costs and benefits closely when making a decision.
workaround_available
Sometimes a pure code fix isn’t necessarily the fastest or even best solution to an identified security vulnerability. During the log4shell crisis, some organizations recommended configuring a web application firewall (WAF) to block exploitation attempts. Such workarounds can be temporary measures needed to buy time until an update can be released or could be longer-term fixes for especially thorny issues.
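Because the response field accepts multiple values, a publisher can communicate both the stopgap and the longer-term plan in a single statement. Here is an illustrative sketch of what that might look like during a log4shell-style event (the recommendation text is invented):

```json
{
  "vulnerabilities": [
    {
      "id": "CVE-2021-44228",
      "analysis": {
        "state": "exploitable",
        "response": ["workaround_available", "update"],
        "detail": "Block exploitation attempts at the WAF in the meantime; a patched release is planned for the next maintenance window."
      }
    }
  ]
}
```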
detail
This is a free text field where publishers can get into the weeds of a given vulnerability and its implications. While difficult to analyze in an automated way, it can provide additional context to security teams that are especially concerned about a given issue. In addition to giving further justification as to why a vulnerability is not exploitable, publishers can offer more color on the methods of analysis used as well as the plans to resolve a given finding.
Security teams should prioritize examining this field for vulnerabilities with a high probability of exploitation. For less pressing issues, however, it might make sense to focus more on the easily machine-readable VEX fields.
Conclusion
Modern vulnerability management requires sorting through a huge amount of noise. Thousands of scanner findings, news reports, and security researcher inputs can sometimes overwhelm teams and make it hard for them to react effectively. Thankfully, the VEX standard promises to greatly improve the situation.
Instead of requiring labor-intensive email and spreadsheet reviews, organizations can communicate relatively seamlessly using a standardized format.
Enterprises that have effective tools, techniques, and procedures for making use of the VEX protocol stand to save huge amounts of time and energy in their application security, risk management, and incident response efforts.
And if you want to see how Endor Labs leverages VEX to do this, check out our demo library!