
Blocking with Confidence: Relativity's Dev Experience Journey

Relativity changed their security program from a blocker to an enabler by integrating security into developer workflows and empowering developers to prevent risks before they ship to production.

Written by
Joni Musa
Raphael Theberge
Published on
September 24, 2024

This blog is by members of the Relativity security team, Joni Musa and Raphael Theberge.

The security profession is always evolving, and one of the best improvements in recent years is the shift toward developer experience. Relativity, a global legal technology company and SaaS provider, already focuses on enabling developer productivity. Our Calder7 security team aligns with that goal: our mission is to help the engineering department accelerate their day-to-day work by reducing the cognitive load of security while still ensuring they deliver software quickly and securely.

Changing how Relativity approaches security

Like many organizations, Relativity had a traditional security program in which the security team and tools sat outside the software development lifecycle (SDLC). In recent years, we found that strategy wasn't reducing risk effectively and was slowing down release velocity. Security was more of a blocker than an enabler. So we decided to try a new way of working, one where security is there to assist others. That's what every security person wants to be doing, but in reality our old ways of working didn't make it possible.

Our new mission is to make it possible for engineering and product management to manage risk in a way that makes sense for their products. At Relativity, everyone already understands the importance of security because of the types of products we make. With that buy-in, we were confident that we could delegate and democratize security. So it was a natural next step to reach out to our partners across the business and say, "We know security is cumbersome for you. How can we lower the friction while keeping our high standards?" And people were happy to jump in because they knew we were committed to making their lives easier.

The result is a secure SDLC program we call Blocking with Confidence. Designed to enable faster response and remediation of the highest-priority risks (and reduce our organization's exposure), this program lets developers identify and resolve security risks before code gets shipped to production. To get this program off the ground, we needed to answer four questions:

  • How will we make it developer-friendly?
  • How will we ensure developers make safe decisions?
  • What requirements and regulations need to be considered?
  • What KPIs indicate success or issues?

Making security developer-friendly: democratize knowledge and tools

In a more traditional security program, the AppSec team gets alerts about new risks, triages them manually based on factors like CVSS, and then assigns tickets to developers. But our vision for Blocking with Confidence is that developers manage security without active intervention or oversight from the security team. In fact, we consider it a minor failure if a developer has to reach out to us with a question about a finding, because that means we didn't get all the necessary information into their hands via automation.

There are several ways we help developers make "micro decisions" on a day-to-day basis, from a Security Champions program to embedding security within their workflows. For example, we integrate application security testing tools, like software composition analysis (SCA), directly into GitHub. When a developer submits a pull request (PR), the tool identifies whether their open source dependencies carry an unacceptable level of risk. The developer is notified (in GitHub) and can swap out the risky dependency on the spot. Or maybe they want to accept a risk. In that case, the responsible product manager can make a case directly to the business for the exception, without having to get permission from AppSec.
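As a rough illustration of what this kind of PR-time gate can look like, the sketch below shows a CI step that asks an SCA service for findings on a pull request and fails only when policy says to block. The endpoint, environment variables, and field names are hypothetical placeholders, not our actual integration or any vendor's API.

```python
# Hypothetical sketch of a PR-time dependency gate. The findings endpoint,
# tokens, and field names are illustrative placeholders only.
import os
import sys

import requests  # assumes the 'requests' package is installed

FINDINGS_API = os.environ.get("SCA_FINDINGS_URL", "https://sca.example.internal/findings")
REPO = os.environ["GITHUB_REPOSITORY"]   # provided by GitHub Actions
PR_NUMBER = os.environ["PR_NUMBER"]      # passed in by the workflow


def fetch_findings(repo: str, pr: str) -> list[dict]:
    """Ask the (hypothetical) SCA service for findings on this PR's dependency changes."""
    resp = requests.get(
        FINDINGS_API,
        params={"repo": repo, "pr": pr},
        headers={"Authorization": f"Bearer {os.environ['SCA_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["findings"]


def main() -> int:
    findings = fetch_findings(REPO, PR_NUMBER)
    # Fail the check only for findings the policy marks as unacceptable;
    # everything else surfaces as informational feedback on the PR.
    blocking = [f for f in findings if f.get("policy_action") == "block"]
    for f in blocking:
        print(f"BLOCK: {f['package']}@{f['version']} - {f['summary']}")
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(main())
```

The point is that the decision surfaces in the pull request itself, so the developer never has to leave GitHub to see it or act on it.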

This process also results in applications that are secure by default. In the old way of doing security, there was a tremendous expectation that developers would be knowledgeable about the dependencies they choose. But that’s not the reality — knowledge levels will always vary, no matter how skilled a developer is. And for junior developers, the risk of a bad selection was much higher. But with automation of security in their pipelines, that junior developer doesn’t have to worry about making security mistakes; the tooling won’t let them.

Introducing new security tooling with care (and automation)

To make this program work, we needed to make changes in our security stack. Developers can be skeptical about new tooling, and we need to avoid pushback caused by poor change management. This is why we don't just hand a new tool to developers. Instead, the security team acts as a filter: We build software helpers that match the needs of our engineers so they can consume the tooling in their secure SDLC process. We avoid using the UI where possible, and instead build tooling that sits in the middle, democratizing data from a tool by feeding it directly into ticketing systems like Jira.
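A minimal sketch of that helper pattern might look like the following: take a finding from a scanner and open a Jira issue through Jira's standard REST endpoint. The scanner-side field names and the SEC project key are assumptions for illustration only.

```python
# Minimal sketch of a "helper" that routes scanner findings into Jira.
# The finding fields and project key are hypothetical; the Jira call uses
# the standard REST v2 issue-creation endpoint.
import os

import requests

JIRA_URL = os.environ["JIRA_URL"]  # e.g. https://yourcompany.atlassian.net
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])


def create_jira_ticket(finding: dict, project_key: str = "SEC") -> str:
    """Create a Jira issue for a single finding and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[SCA] {finding['package']}: {finding['title']}",
            "description": (
                f"Severity: {finding['severity']}\n"
                f"Component: {finding['package']}@{finding['version']}\n"
                f"Recommended fix: {finding.get('fix_version', 'see advisory')}"
            ),
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=JIRA_AUTH, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["key"]
```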

Ensuring developers make safe decisions: gates and guardrails 

We trust developers to use these tools and take action, and we verify this trust through additional security gates and checks. By using the same tools and processes given to developers, we automate the validation of their decisions before code is deployed. For example, if a developer decides to proceed with committing a PR that contains a risk, automation helps us confirm that the risk is acceptable or engage to make sure it’s remediated appropriately. In the example of exceptions, we have organization-wide visibility into the risk levels for each product — and this visibility is available all the way up to the board level (and you’d better believe they care). We can easily see outliers, which prompt cross-functional conversations about the product’s security posture.
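Conceptually, the deploy-time guardrail can be as simple as re-checking every open finding against an exception registry before release. The sketch below is an illustrative outline under assumed data shapes, not our production gate.

```python
# Illustrative deploy-time guardrail: confirm that every open finding on a
# release is either remediated or covered by an approved, unexpired exception.
# The finding and exception shapes are hypothetical.
from datetime import date


def is_release_allowed(findings: list[dict], exceptions: dict[str, dict]) -> bool:
    """Return True only if no unremediated finding lacks a valid exception."""
    for f in findings:
        if f["status"] == "remediated":
            continue
        exc = exceptions.get(f["id"])
        if exc is None:
            print(f"GATE FAIL: {f['id']} has no approved exception")
            return False
        if date.fromisoformat(exc["expires"]) < date.today():
            print(f"GATE FAIL: exception for {f['id']} expired on {exc['expires']}")
            return False
    return True
```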

Vulnerability management requirements and regulations: customer expectations and FedRAMP

Every organization has different requirements for what risks can be accepted vs remediated or mitigated. At Relativity, we remediate 100% of exploitable risks, regardless of severity level. There are two reasons for this:

  • We are a security company: Our customers have high expectations of the security and reliability of our products. 
  • FedRAMP compliance: We have U.S. federal government customers that require Relativity to be FedRAMP authorized, and that program requires all exploitable risks to be remediated.

Because we cannot accept exploitable risks (even if the probability is very low), we needed to make security as lightweight and automated as possible. This requirement further influenced our tool selection. For example, we needed tools that reliably triage findings — both to remove unexploitable risks and to prioritize the remaining findings. This helps us meet SLAs, which can be very aggressive depending on the regulation.

Measuring security program success

Mean time to resolution (MTTR) — the average time it takes to fully address a risk — is one of our most important metrics. The purpose of this metric is to see how effectively and efficiently we're dealing with risk at scale. We also look at the following questions (a rough MTTR calculation sketch follows the list):

  • How quickly do we identify risk?
  • How accurate is our risk prioritization?
  • Are we fixing the highest risks first?
  • Are we preventing risks from getting into production?
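Assuming each resolved finding carries detection and resolution timestamps (the field names below are illustrative, not our actual schema), the MTTR calculation itself is straightforward:

```python
# Rough sketch of an MTTR calculation over resolved findings.
from datetime import datetime


def mean_time_to_resolution(findings: list[dict]) -> float:
    """Average hours from detection to resolution across resolved findings."""
    durations = [
        (datetime.fromisoformat(f["resolved_at"]) - datetime.fromisoformat(f["detected_at"]))
        .total_seconds() / 3600
        for f in findings
        if f.get("resolved_at")
    ]
    return sum(durations) / len(durations) if durations else 0.0


# Example: two findings resolved in 24h and 72h -> MTTR of 48 hours.
print(mean_time_to_resolution([
    {"detected_at": "2024-06-01T00:00:00", "resolved_at": "2024-06-02T00:00:00"},
    {"detected_at": "2024-06-01T00:00:00", "resolved_at": "2024-06-04T00:00:00"},
]))
```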

We also regularly run pentests on our security tools to ensure confidence in what those tools tell us. After all, Blocking with Confidence hinges on automating risk detection using trusted tools. When a test indicates that a tool is giving inaccurate results, that is one indicator that it's time to replace the tool.

Replacing an underperforming SCA tool

Like so many new programs, ours had a fast initial rollout, and now we're down to the fine-tuning that can be trickier to get right. At this stage it's often about replacing tools to ensure what we have in place is supporting the program. The dream we're trying to achieve is that security is so integrated into the developer experience that developers aren't aware they're doing a "security task." This will require a mix of education, process changes, improved automation, and integrated tools. We pick our tools based on how well they'll get us to that dream without tremendous effort.

Unfortunately, our SCA tool didn't meet this bar: It didn't fit into developer workflows, and we didn't trust what it told us about open source dependency risk. We weren't able to achieve the vision of automating risk detection for this category.

Problem #1: Questionable inventory and risk correlation

To trust a tool, we have to understand (and agree with) how it finds risk and how it decides what's high risk or exploitable. With our incumbent SCA, we couldn't get sufficient data on what was exploitable, and it lacked transparency about how it came to its conclusions. This problem can be traced back to how the tool constructed the dependency inventory: manifest scanning. A tool that only uses this method looks at the manifest file, builds a dependency tree off of it, then compares the packages and versions against a database of known vulnerabilities. A "best guess" at our dependency inventory wasn't acceptable for our risk tolerance framework.

Problem #2: Noise without context

A compounding problem was that the tool was very noisy, meaning it "found" a lot of risk, but we didn't know which findings were real. To help filter results, it only provided basic information about a vulnerability that could be found on the internet (like CVSS), but it didn't tell us whether the vulnerability could be exploited in our application context. Since we have to fix 100% of risks, we need high confidence in the exploitability and severity of each risk so we can prioritize work effectively. In addition to being unsure about the inventory it generated, we had no confidence in what it told us was most important to fix first.

Problem #3: Not automation-friendly

And finally, it wasn't easy to integrate into our developer workflows. The user experience was designed solely around the UI, which meant the API was missing a majority of the capabilities available in the UI and lacked overall functionality. TL;DR, it didn't support the effort to automate at scale. It was also disappointing from a policy management perspective: We could only set "if this, then that" rules that blocked a build (e.g., if a finding was critical or high, fail the build). This runs the risk of breaking builds unnecessarily, which erodes developer trust.

Requirements for a New SCA Tool

When we run a Proof of Concept (POC), we go in with an "intent to buy" mindset. This means we have a clear idea of what we want (improvements and experience) and an intention to work with the vendor to make it happen. From a developer-first perspective, the main priorities are: How is this tool going to help our Blocking with Confidence program? How is this going to help our metrics?

To achieve this, we needed a new tool to do three things:

  • Requirement #1: Automated, trustworthy prioritization of vulnerability alerts
    Be able to “block builds with confidence” with clear, systematic evidence for why an issue is severe enough to block. Developers and AppSec engineers have data they can use to understand/adjust severity ratings.
  • Requirement #2: Increase developer self-sufficiency
    Manage policies centrally to reduce the number of tickets that require security personnel to be in the loop and the number of tickets that go “silently unaddressed.” Developers can identify and remediate within SLAs.
  • Requirement #3: Gain visibility into application makeup
    Clearly identify all open source libraries and provide visibility into the risks those dependencies bring into our applications. AppSec engineers are confident that all risk is being discovered.

Selecting Endor Labs for our SCA replacement

We ran a successful POC with Endor Labs and selected them for our SCA replacement. They were very successful in two key areas that mapped to our POC requirements.

Program Analysis and Function-Level Reachability

First, we needed to determine whether they could perform automated, trustworthy prioritization of vulnerability alerts. When determining exploitability, it isn’t adequate to just look at whether the dependency could be available to the internet or live on a certain cloud asset. We also need to know if it’s exploitable from coding and software perspectives so we can decide if it requires remediation. Endor Labs’ engineering, product, and leadership teams share our perspective, which is why they offer function-level reachability. This means they can determine exploitability at the function level, and if something is marked unreachable, then we don’t need to worry about remediation. Endor Labs was the only tool that could provide reachability in a way that we trusted and could confidently integrate into Blocking with Confidence.

The way they determine reachability is also the way they help us gain visibility into the application makeup. Using a technique called program analysis, they essentially create a simulation of our application at the time of build. This provides an accurate inventory of dependencies and how they interact with each other.
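A toy example helps show why this matters. In the sketch below, a stand-in "dependency" exposes one vulnerable function that our code never calls; a manifest-only scanner would still flag the package, while call-graph-based reachability can mark the finding unexploitable in context. This is purely conceptual and not a depiction of Endor Labs' internal analysis.

```python
# Conceptual illustration of function-level reachability. The two functions
# below stand in for a third-party package with one vulnerable function.

def vulnerable_deserialize(blob: bytes):      # imagine a CVE lives here
    raise NotImplementedError("never called by our application")


def safe_format(rows: list[str]) -> str:      # the only function we actually use
    return "\n".join(rows)


# Our application's call graph only reaches safe_format, so a reachability-aware
# tool can mark the CVE in vulnerable_deserialize as unreachable in context,
# while a manifest-only scanner would still flag the whole package.
def weekly_summary(rows: list[str]) -> str:
    return safe_format(rows)


print(weekly_summary(["finding A resolved", "finding B resolved"]))
```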

Ease of deployment and policy management

As I mentioned above, it's our practice to build a helper to integrate each tool into workflows. But with Endor Labs, we didn't have to do much from a software tooling perspective to get up and running. That sounds easy to say, but with a lot of other tooling, we've had a lot of friction with their APIs.

Endor Labs made our job easy in two ways: 

  • API-first approach: Endor Labs is an API-first company, meaning that anything you can do in the API, you can do in the UI and vice versa.
  • GitHub app: Endor Labs’ GitHub app continuously monitors projects for security and operational risk and we can use it to selectively scan repositories for SCA, secrets, and more.

Along the same theme, and related to increasing developer self-sufficiency, was the question of policy management. With Endor Labs, there's a lot of flexibility with policies: You don't have to break every single build whenever there's an issue. Depending on the scenario, we can warn or just notify. And we're not limited to filtering by CVSS; policies can be implemented based on EPSS, reachability, fix availability, and more.
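To make that difference concrete, here is an illustrative bit of pseudologic for the kind of tiered decision those signals enable. The thresholds and field names are our own invention for explanation, not Endor Labs' policy syntax.

```python
# Illustrative tiered policy decision using reachability, EPSS, and fix
# availability. Thresholds and field names are assumptions for this sketch.

def policy_action(finding: dict) -> str:
    """Map a finding to 'block', 'warn', or 'notify'."""
    reachable = finding.get("reachable", False)
    epss = finding.get("epss", 0.0)          # probability of exploitation, 0..1
    fix_available = finding.get("fix_available", False)

    if reachable and epss >= 0.1 and fix_available:
        return "block"   # exploitable, likely to be attacked, and fixable now
    if reachable:
        return "warn"    # exploitable but lower likelihood or no fix yet
    return "notify"      # unreachable: record it, don't interrupt the build


print(policy_action({"reachable": True, "epss": 0.42, "fix_available": True}))  # block
```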

Results with Endor Labs

With Endor Labs successfully deployed, we could start measuring against our requirements. In the first week after turning on the tool, we saw an 80% reduction in risks we had to remediate, all due to reachability analysis — and we continue to see that number climb. For an organization that has to fix 100% of exploitable risks, getting an 80% reduction of our workload was huge. This caused an instant change in cognitive load for developers and the security team. Without the tedium and minutia of tracking down individual items that might not matter, we can focus on the remaining vulnerabilities that would impact customers and our FedRAMP compliance.

As you might expect, the volume of remediation work for our developers has drastically decreased. What’s more surprising is that a junior engineer can now remediate findings that previously required a senior or lead engineer. How can this be? It’s all about time. When we had to remediate every vulnerability, the time allotted to fix each vulnerability was very short. And only the very experienced could do it fast enough to meet our SLAs. But now that we only fix what we know is exploitable, the workload easily fits into our SLAs (including FedRAMP). Now anyone can be responsible for remediation, which means everyone gets that crucial skill development and exposure to security as a routine part of their job.

The security team rarely has to interface with Endor Labs because it’s so automated, and that’s a good thing. Sometimes a developer reaches out because they’re not sure what to do with a finding, but that’s a reflection of our processes maturing. When this happens, it’s a trigger for us to update our documentation or automation to ensure they have what they need to remediate independently.

The TL;DR on Endor Labs

Joni: If a friend asked for an SCA recommendation, I would tell them to seriously consider Endor Labs. It was the best tool that we found to show exploitability in the application itself. That’s what helps our security and engineering groups to identify risk and move quickly.

Raphael: It's been a really good experience partnering with Endor Labs. There are usually problems and issues that only show up after things are signed, and you grow used to that and normalize it. It's refreshing to see that's not always the case. The product delivers on what it says it delivers, which is not something I would say about a lot of tools out there.

About the authors and Relativity

Raphael Theberge is Director of Security Enablement at Relativity, where he owns Security Enablement, reducing friction between security standards and engineering practices. His scope includes application security, vulnerability management, AI security, and M&A.

Joni Musa is the former Head of Security and Deputy CSO at Relativity. The role spanned cloud security and governance, identity and access management, security enablement (application security, vulnerability management), cybersecurity (threat intelligence), incident response, security architecture (general security guidance for the organization), and, last but not least, security development, which creates security applications for Relativity end users.
