
Open Source Security 101: How to Evaluate Your Open Source Security Posture

Written by
Chris Hughes, Chief Security Advisor at Endor Labs
Published on
November 16, 2023

The role of open source software (OSS) in the modern digital landscape has accelerated tremendously in the last decade. Numbers vary, but several studies demonstrate that modern codebases and applications are composed of 60-80% OSS, with some estimates projecting that 97% of all software contains at least some OSS code. OSS is pervasive not just in consumer goods and software but also in critical infrastructure and national security systems. In fact, many organizations, including the U.S. government, actively encourage the use of OSS.

However, the industry’s approach to using OSS securely hasn’t evolved at the same pace. Researcher Chinmayi Sharma penned an excellent article on OSS security and how our digital infrastructure is “built on a house of cards.” The metrics on the state of most OSS usage are alarming, including figures such as:

  • 88% of codebases contain OSS components that have had no new development in two years.
  • 85% of codebases contain OSS components that are more than four years out of date.
  • 81% of codebases containing OSS components have one or more known vulnerabilities.

These metrics are just the tip of the iceberg of the vulnerable OSS attack surface that lurks across our modern digital landscape. This isn’t to say that OSS is inherently bad, or shouldn’t be used, but it is to say that organizations need to evolve their approach to using OSS securely. So let’s discuss four fundamental considerations organizations should look at when it comes to securely using OSS:

  • Inventory
  • Vulnerabilities
  • Selection
  • Project Health 

Inventory

First and foremost, organizations need to have a foundational understanding of the OSS they’re using. Software asset inventory has been a critical security control for several years (even decades), but most organizations have a poor understanding of their OSS consumption and usage. Utilizing tools such as Software Composition Analysis (SCA), organizations can begin to enumerate the third-party components in their applications and code. Additionally, we’re of course seeing a push for the adoption of Software Bill of Materials (SBOM) to enable transparency about software components. 
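To make this concrete, here’s a minimal sketch of what enumerating an inventory can look like once you have an SBOM in hand. It simply lists the components recorded in a CycloneDX-format JSON SBOM; the file path is a placeholder, and the field names follow the CycloneDX JSON schema as I understand it.

```python
import json

# Minimal sketch: list the components recorded in a CycloneDX JSON SBOM.
# The file path is a placeholder; any SCA or SBOM generator that emits
# CycloneDX JSON should produce a "components" array like the one read here.
with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "<unknown>")
    version = component.get("version", "<unknown>")
    purl = component.get("purl", "")  # package URL, if the tool emitted one
    print(f"{name}=={version} {purl}")
```

Even a simple listing like this gives you something most organizations lack: a current, queryable picture of the third-party components actually in your applications.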

Vulnerabilities

Having an inventory of OSS components is foundational, but it’s of limited utility without knowing which components are actually vulnerable. Again, this is where tools such as SCA can provide insight into the known vulnerabilities associated with the components in the codebase. These tools typically compare the components against vulnerability databases such as NIST’s National Vulnerability Database (NVD), the quickly growing Open Source Vulnerability (OSV) database, and others such as the GitHub Advisory Database.
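As a rough illustration of how such a lookup works under the hood, the sketch below queries the public OSV API for known vulnerabilities affecting a single package version. The package name and version are placeholders, and error handling is omitted for brevity.

```python
import json
import urllib.request

# Minimal sketch: ask the OSV database which known vulnerabilities affect
# one specific package version. The package and version are placeholders.
query = {
    "package": {"name": "jinja2", "ecosystem": "PyPI"},
    "version": "2.4.1",
}
req = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

for vuln in result.get("vulns", []):
    print(vuln["id"], vuln.get("summary", ""))
```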

Another key advancement in the vulnerability management space is the shift away from focusing on vulnerabilities alone and toward coupling vulnerabilities with enrichment data around known exploitation and exploitability. The reality is that less than 5% of all known vulnerabilities in sources such as NIST NVD are ever exploited in the wild. To deal with this reality, organizations are turning to sources such as CISA’s Known Exploited Vulnerabilities (KEV) catalog and the Exploit Prediction Scoring System (EPSS) to help prioritize vulnerable components that are either known to be exploited (KEV) or likely to be exploited in the next 30 days (EPSS).
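To show how these enrichment sources can be combined in practice, here’s a rough sketch that checks a single CVE against CISA’s public KEV feed and the EPSS API. The feed URL, response field names, and the 0.1 threshold are my assumptions and an illustrative policy choice, not official guidance.

```python
import json
import urllib.request

CVE = "CVE-2021-44228"  # example CVE; substitute your own findings

# Assumed public data sources: CISA's KEV JSON feed and FIRST's EPSS API.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_URL = f"https://api.first.org/data/v1/epss?cve={CVE}"

# Is the CVE in CISA's Known Exploited Vulnerabilities catalog?
with urllib.request.urlopen(KEV_URL) as resp:
    kev = json.load(resp)
in_kev = any(v.get("cveID") == CVE for v in kev.get("vulnerabilities", []))

# What is its EPSS score (estimated probability of exploitation in the next 30 days)?
with urllib.request.urlopen(EPSS_URL) as resp:
    epss = json.load(resp)
score = float(epss["data"][0]["epss"]) if epss.get("data") else 0.0

print(f"{CVE}: known exploited={in_kev}, EPSS={score:.3f}")
if in_kev or score > 0.1:  # the threshold is an illustrative policy choice
    print("Prioritize remediation of this finding.")
```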

Vulnerability information is great, but it inevitably can and will be noisy because organizations use so many components, many of which may have associated known vulnerabilities. To mitigate the noise and toil that legacy SCA tooling typically throws onto engineering and development teams, we’re seeing increased adoption of next-generation SCA tooling that includes critical insights such as reachability. This means you get visibility into not just what is vulnerable but also whether it’s actually reachable within the code/application, so you can prioritize those vulnerabilities accordingly.

If you aren’t using the above capabilities (such as focusing on exploitation and reachability), you’re inevitably dumping toil and frustration on your engineering and development peers, costing them tremendous time and the business significant money.

Finding vulnerabilities in dependencies is key, but for a more detailed deep dive on what to do next, I recommend another Endor Labs blog from Alexandre Wilhelm titled “You found vulnerabilities in your dependencies, now what?”

Selection

After you get a handle on your existing OSS footprint and begin to understand the vulnerabilities associated with it, a logical next step is “shifting left” to making more risk-informed component selections. This is where tooling that integrates into developer workflows and empowers developers to make risk-informed decisions about which OSS components they bring into applications and code is crucial. This insight can range from data about vulnerabilities and exploitation to insights into project health, versioning, and licensing concerns. All of these are much easier and more logical to address before components are integrated into codebases, and especially before they reach production runtime environments.
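One way to put this into practice is a lightweight selection gate that runs before a new dependency is adopted. The sketch below is illustrative only: the findings structure, thresholds, and sample data are hypothetical, simply combining the kinds of signals discussed above (known vulnerabilities, KEV status, EPSS) into an allow/deny decision.

```python
# Illustrative-only selection gate: given vulnerability findings for a candidate
# dependency (e.g. gathered from OSV, KEV, and EPSS as sketched earlier), decide
# whether to allow it into the codebase. The findings shape, thresholds, and
# sample data below are hypothetical policy choices, not guidance from this post.
def approve_dependency(name: str, version: str, findings: list[dict]) -> bool:
    for finding in findings:
        if finding.get("known_exploited") or finding.get("epss", 0.0) > 0.1:
            print(f"Reject {name}=={version}: {finding['id']} is exploited or likely to be")
            return False
    if findings:
        print(f"Warn: {name}=={version} has {len(findings)} known vulnerabilities")
    return True

# Example usage with sample (made-up) findings:
sample = [{"id": "EXAMPLE-0001", "known_exploited": False, "epss": 0.02}]
print("approved:", approve_dependency("examplelib", "1.2.3", sample))
```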

Project Health

While vulnerability information is absolutely crucial, known vulnerabilities are a lagging indicator of risk: the components are already vulnerable, you know it, and rest assured, so do malicious actors. This is where projects such as OpenSSF Scorecard are useful in helping organizations understand the hygiene and security posture of the projects they’re consuming or considering using. The OpenSSF Scorecard looks at a myriad of factors such as branch protection, code review, contributor density, maintenance, and much more.
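For a concrete sense of what that data looks like, the sketch below pulls a project’s published Scorecard results from the public Scorecard API. The endpoint shape and response field names reflect my understanding of that API rather than a guaranteed contract, and the repository is just an example.

```python
import json
import urllib.request

# Minimal sketch: fetch OpenSSF Scorecard results for a repository from the
# public Scorecard API. Endpoint shape and field names are assumptions about
# that API; the repository below is just an example.
repo = "github.com/ossf/scorecard"
url = f"https://api.securityscorecards.dev/projects/{repo}"

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

print(f"{repo}: aggregate score {result.get('score')}")
for check in result.get("checks", []):
    print(f"  {check.get('name')}: {check.get('score')}")
```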

Conclusion

While this list is far from an exhaustive accounting of the security measures organizations can and should take to use OSS securely, it is a great start. Understanding what you have, knowing its vulnerability footprint, making risk-informed component selections, and monitoring project health will set you on a trajectory to mitigating some of the most evident risks associated with OSS. You should want awareness not only of the OSS components you’re using as an organization, but also of those embedded in the software and products you’re consuming from third parties such as your vendors, especially with the growth of software supply chain attacks and malicious actors increasingly targeting software suppliers and widely used OSS components. Lastly, I strongly recommend checking out the “Top 10 OSS Risks” which Endor Labs has previously published.
