
Meet the application security platform built for the AI era

The era of vibe coding is here. Learn how Endor Labs is helping AppSec teams secure and fix AI-generated code with a new agentic AI platform.

Written by
Amod Gupta
Dimitri Stiliadis — Co-Founder and CTO, Endor Labs
Published on
April 23, 2025

It’s a big day at Endor Labs. We’re announcing a major expansion of our application security platform—and yes, a $93M Series B to accelerate what we’re building with it. At the center of today’s launch is a new platform architecture designed to meet the challenges of AI-generated code. It’s powered by an agentic AI framework and deep intelligence into the open source code that today’s AI models are trained on.

We’re launching the first capabilities built on this platform today:

  • AI Security Code Review: Discover architectural changes that could affect the security posture of your applications by using Endor Labs’ AI agents to review every pull request (PR).
  • Endor Labs MCP Server: Detect and fix vulnerabilities in AI-generated code — before they leave the IDE — when you integrate Endor Labs’ scanning with tools like GitHub Copilot and Cursor.

Together, these features will help AppSec teams become catalysts for secure AI adoption in their organizations by finding and fixing risks from AI-generated code earlier in the software development lifecycle, without disrupting developers.

And this is just the beginning. Endor Labs is racing forward to build the next generation of solutions to secure AI applications, code, and agents. Those solutions will be powered by our new platform that combines purpose-built AI agents for AppSec teams with deep application context, proprietary vulnerability intelligence, and persistent memory.

The era of vibe coding is here

We’re in the middle of a software development revolution. Until recently, 80% of code came from open source. Moving forward, 80% of code will be generated by AI. That future isn’t far off—it’s already reshaping how software gets built today.

Engineering leaders are embracing the productivity gains from AI. According to the 2024 DORA Report, 75% of developers now use AI coding assistants like GitHub Copilot and Cursor. GitHub estimates that up to 40% of today’s code is AI-generated—and that number is accelerating.

This is the vibe coding era, where AI coding assistants generate large volumes of code with minimal developer oversight or review. Developers increasingly trust their AI assistants, often accepting suggestions with little modification. It’s fast, efficient, and transformative—but it’s also risky.

AppSec teams are being left behind. Today’s tools surface millions of alerts per year. If nothing changes, that number could balloon to five million, flooding teams with noise while the most critical risks slip through. The business is moving fast, and AppSec teams are expected to keep up.

This is AppSec’s moment to lead

While developers enjoy massive productivity gains aided by tools like GitHub Copilot and Cursor, security teams face a growing backlog—not just more code to review, but also more alerts and vulnerabilities.

New research shows that 62% of AI-generated code suggestions contain design flaws or security vulnerabilities, even when using the latest models. Another study found that nearly 30% of AI-generated code snippets include exploitable security flaws. Despite these risks, over 77,000 organizations have adopted AI coding assistants over the past two years to increase developer productivity, often without a plan to assess or mitigate the security impact.

And the risks go beyond volume. AI-generated code can introduce deep architectural changes that impact the security posture of entire applications. These include new authentication flows, API endpoints, sensitive data handling, and cryptographic implementations that traditional Static Application Security Testing (SAST) tools weren't built to detect.

But this isn’t just a challenge—it’s an opportunity.

With the right platform and tools, AppSec teams can become catalysts for secure AI adoption in their organizations, embedding protection earlier in the development lifecycle and enabling the business to move fast without compromising safety. But to do that, AppSec teams must re-tool for this new reality.

The AppSec platform built for AI-native software development

With tools like Cursor and Copilot generating code in real time, security needs to be embedded at the moment of creation—not waiting for a scan downstream at build time or in the CI/CD pipeline. Traditional scanning might take minutes, but fixing issues often takes weeks or months—if they're caught at all.

AI coding assistants make it easier to implement fixes, but only if they have accurate application context and vulnerability data, so they can surface meaningful risks quickly and offer specific guidance on how to fix them without breaking the application.

That's why Endor Labs developed the next generation of our application security platform with these requirements in mind.

At the core of the platform are dedicated AI agents built specifically for application security. These agents reason about code like developers, architects, and security engineers. They work in concert to review code, identify risks, and recommend precise fixes—extending the capabilities of security teams without creating developer friction.

To support these agents, we’ve layered in everything they need to make smart, context-aware decisions:

  • Advanced analysis: Our scanners for SCA, SAST, container security, and secrets detection create a comprehensive graph of your application and everything it depends on. This graph enables our AI systems to understand context when recommending fixes.
  • Code intelligence: A rich dataset of vulnerability data, language call graphs, and embeddings derived from scanning 4.5 million open source libraries and AI models.
  • Persistent memory: The platform learns from your team’s feedback and decisions, improving its recommendations over time while adapting to your specific codebase and preferences.
  • Flexible orchestration: With API, CLI, and MCP (Model Context Protocol) support, every function of the platform is available wherever you need it—from the IDE to CI pipelines. (A minimal API-polling sketch follows this list.)
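
As a concrete, simplified illustration of that flexibility, the sketch below polls a findings API from a CI job and fails the build on critical findings. This is not the actual Endor Labs API: the base URL, endpoint path, and response fields are hypothetical stand-ins for whatever the real interface exposes.

    # Minimal sketch: gate a CI job on open critical findings.
    # Hypothetical API -- the base URL, path, and fields are stand-ins,
    # not the real Endor Labs interface.
    import os
    import sys

    import requests

    API_BASE = "https://api.appsec.example.com/v1"  # hypothetical
    TOKEN = os.environ["APPSEC_API_TOKEN"]

    def fetch_findings(project: str) -> list[dict]:
        """Fetch open findings for a project from the hypothetical API."""
        resp = requests.get(
            f"{API_BASE}/projects/{project}/findings",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"state": "open"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["findings"]

    if __name__ == "__main__":
        critical = [
            f for f in fetch_findings("payments-service")
            if f.get("severity") == "critical"
        ]
        for finding in critical:
            print(f"{finding['id']}: {finding['title']}")
        sys.exit(1 if critical else 0)  # non-zero exit fails the CI job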

Why open source expertise matters for AI code security

This architecture is powerful on its own, but what truly sets Endor Labs apart is the data that powers it.

AI-generated code isn’t truly novel. It’s assembled from patterns, snippets, and logic that already exist—most of it pulled from open source code. This is why detailed knowledge of the open source ecosystem is critical for accurate security analysis. Our platform leverages years of investment in mapping code structures, execution patterns, and vulnerability paths.

Here’s what our data foundation includes:

  • Annotated vulnerability database: We maintain precise annotations of vulnerable code at the line level across millions of open source packages. Our system performs 150+ distinct security and quality checks on every open source library and AI model, with our database covering more than a billion data points.
  • Language call graphs: We’ve indexed billions of functions across 4.5 million open source projects and libraries in all major programming languages. This allows us to trace actual execution paths through your application rather than just identifying package dependencies. (A toy reachability example follows this list.)
  • OSS code embeddings: Our system maintains over 500 million vector embeddings that enable us to detect code reuse or transformation—even when AI assistants have renamed variables, refactored methods, or restructured logic. This enables more accurate security and license compliance analysis at the code level.
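
As a toy illustration of why call graphs matter: a package-level check flags any application that depends on a vulnerable library, while a function-level reachability check asks whether application code can actually reach the vulnerable function. Every name below is invented.

    # Toy function-level reachability over a call graph. A package-level
    # check would flag this app for depending on "parser_lib"; the
    # reachability check shows the vulnerable function is never called.
    from collections import deque

    # caller -> callees (a tiny invented slice of an application graph)
    CALL_GRAPH = {
        "app.handle_request": ["app.validate", "parser_lib.parse_json"],
        "app.validate": ["app.log"],
        "parser_lib.parse_json": ["parser_lib.decode_utf8"],
        "parser_lib.parse_yaml": ["parser_lib.unsafe_load"],  # vulnerable path
    }

    def reachable(entry: str, target: str) -> bool:
        """Breadth-first search from an entry point to a target function."""
        seen, queue = {entry}, deque([entry])
        while queue:
            fn = queue.popleft()
            if fn == target:
                return True
            for callee in CALL_GRAPH.get(fn, []):
                if callee not in seen:
                    seen.add(callee)
                    queue.append(callee)
        return False

    # The vulnerable function exists in a dependency, but no path from
    # the entry point reaches it, so the finding can be deprioritized.
    print(reachable("app.handle_request", "parser_lib.unsafe_load"))  # False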

This depth of insight doesn’t just help us detect what’s wrong—it helps AI tools fix it intelligently. When a coding assistant sees a risk, our data helps it choose the safest path forward.

Fix vulnerabilities in AI-generated code at the source

One of the most powerful opportunities in the AI era is to catch security issues as code is being generated rather than after the fact. This helps security and engineering teams work more efficiently together.

That’s why we developed the Endor Labs MCP Server, the first capability built on our new platform architecture. It integrates directly with AI coding tools like GitHub Copilot and Cursor, embedding security analysis into developer workflows—before a pull request is even created.

In practice, here’s how it works:

With the Endor Labs MCP Server, developers receive security insights while coding, directly in their chats with AI coding assistants. These insights include detailed vulnerability information and actionable remediation guidance.
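
For readers new to MCP, the pattern looks roughly like the sketch below: a server exposes tools that a coding assistant can call mid-conversation. It uses the open-source MCP Python SDK, but it is not the Endor Labs MCP Server; the tool and its toy advisory data are invented to show the shape of the integration.

    # Toy MCP server exposing a dependency check as a tool, built on the
    # open-source MCP Python SDK (pip install "mcp[cli]"). NOT the Endor
    # Labs MCP Server -- the tool logic and data are invented.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("toy-appsec")

    # package -> (affected range description, first fixed version)
    ADVISORIES = {
        "requests": ("< 2.31.0 (CVE-2023-32681)", "2.31.0"),
    }

    @mcp.tool()
    def check_dependency(package: str, version: str) -> str:
        """Check a package/version against the toy advisory table."""
        advisory = ADVISORIES.get(package)
        if advisory is None:
            return f"{package} {version}: no known advisories."
        affected_range, fixed = advisory
        return (
            f"{package} {version}: affected if {affected_range}. "
            f"Upgrade to {fixed} or later."
        )

    if __name__ == "__main__":
        mcp.run()  # stdio transport, so an assistant can call the tool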

For example, if a developer adds code that introduces a vulnerable dependency, the MCP Server does the following (the upgrade-selection step is sketched after the list):

  1. Detects the vulnerable package
  2. Analyzes how the application actually uses the vulnerable functions
  3. Determines which versions contain the fix without introducing breaking changes
  4. Provides specific upgrade guidance with compatibility information
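
A sketch of the upgrade-selection logic in steps 3 and 4 might look like the following, under the assumption that releases follow semantic versioning, so a fix within the same major version is the lowest-risk suggestion. The function and data are illustrative, not the platform’s implementation (uses the `packaging` library).

    # Sketch of upgrade selection: prefer the smallest fixed version that
    # stays within the current major version, since a major bump is the
    # likeliest source of breaking changes. Illustrative only.
    from packaging.version import Version

    def pick_upgrade(current: str, fixed_versions: list[str]) -> tuple[str, str]:
        """Return (suggested version, rationale) for the safest fix."""
        cur = Version(current)
        candidates = sorted(v for v in map(Version, fixed_versions) if v > cur)
        if not candidates:
            return current, "no fixed version newer than current"
        same_major = [v for v in candidates if v.major == cur.major]
        if same_major:
            return str(same_major[0]), "same major version; low breakage risk"
        return str(candidates[0]), "major version bump; review the changelog"

    print(pick_upgrade("2.28.1", ["2.31.0", "3.0.0"]))
    # -> ('2.31.0', 'same major version; low breakage risk')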

The AI coding assistant can then use this data to automatically generate a secure fix—either by suggesting a specific version upgrade or by rewriting the affected code to maintain compatibility while eliminating the vulnerability.

This transforms what used to be a weeks-long process involving security tickets, developer back-and-forth, and manual fixes into an automated workflow that resolves issues in minutes.

This approach is particularly effective for addressing vibe coding workflows, where developers might otherwise accept AI-generated solutions without doing a close review. Instead of disrupting the rapid development benefits that AI assistants provide, the MCP Server brings security directly into that conversation, allowing developers to maintain their productivity while automatically addressing security concerns.

Detect security architecture changes using AI agents

Traditional scanning tools excel at finding known weaknesses and vulnerabilities. But AI-generated code introduces architectural and security design risks that are much harder for pattern-matching tools like SAST to detect. Currently, AppSec teams rely on manual code review to catch these risks, but this approach cannot keep pace with the volume of AI-generated code.

That’s where AI Security Code Review comes in.

It’s the first agentic capability on our platform. It uses a team of specialized AI agents—acting as developer, architect, and security engineer—to analyze, categorize, and prioritize security-relevant changes in every pull request. (A toy sketch of this multi-agent pattern follows the examples below.)

Unlike traditional scanners that focus on known vulnerability patterns, our system detects higher-level architectural changes that impact your security posture. For example:

  • Addition of AI systems that are vulnerable to prompt injection
  • Modifications to authentication or authorization mechanisms
  • Introduction of new public API endpoints
  • Changes to cryptographic implementations 
  • Alterations to sensitive data handling
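
To make the multi-agent idea concrete, here is a toy sketch of the pattern: several persona-specific reviewers examine the same diff, and their findings are merged and ranked. The `complete()` function is a stand-in for a real LLM call, and every name is invented; the production system is considerably more sophisticated.

    # Toy multi-persona review: each "agent" is a persona prompt over a
    # stubbed model call; findings are merged and sorted by severity.
    from dataclasses import dataclass

    PERSONAS = {
        "developer": "Flag logic changes that alter existing behavior.",
        "architect": "Flag new endpoints, auth flows, or data-handling changes.",
        "security engineer": "Flag crypto, secrets, and injection-prone code.",
    }

    @dataclass
    class Finding:
        persona: str
        note: str
        severity: int  # 1 (low) .. 3 (high)

    def complete(persona: str, prompt: str, diff: str) -> list[Finding]:
        """Stand-in for an LLM call; a trivial keyword heuristic here."""
        if persona == "architect" and "def login" in diff:
            return [Finding(persona, "introduces a new authentication flow", 3)]
        return []

    def review(diff: str) -> list[Finding]:
        findings = []
        for persona, prompt in PERSONAS.items():
            findings.extend(complete(persona, prompt, diff))
        # Highest-severity findings first, so reviewers see what matters.
        return sorted(findings, key=lambda f: -f.severity)

    print(review("+ def login(user, password): ..."))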

Using our secure code review agents, your team can:

  • Surface high-risk changes buried in thousands of pull requests
  • Cut false positives and alert fatigue with contextual prioritization
  • Help security engineers focus on the changes that matter

Endor Labs ensures your security team maintains architectural oversight—even as development velocity increases with AI assistance—by providing precise, actionable insights on the changes that truly matter to your security posture.

The future of AI code security

Today's launch of our MCP Server and AI Security Code Review represents the first phase of our vision for securing the future of AI-native software development. Our roadmap includes additional capabilities planned for release in the coming months, all focused on providing security teams with actionable intelligence and automated remediation tools.

AI Security Code Review will be available to Endor Labs customers in May. To see it in action, book a meeting at RSA Conference, or contact sales to schedule a live demo in May following RSA.
