AI Code Governance

Set standards for safe AI development

Detect and manage risk from open source AI models in your applications.

How it works

1. Discover models: Detect AI models from Hugging Face that are used in your applications.

2. Evaluate risk: Screen AI models for questionable licenses, security problems, and other risky practices.

3. Enforce guardrails: Set organization-wide policies for the adoption and usage of open source AI models.

Loved by security teams, painless for developers.

Detect AI models in your code

Find open source AI models that developers have integrated into your applications:

  • Scan Python applications for AI models from Hugging Face (see the usage sketch after this list)
  • Build an inventory of AI models used in your organization
  • Track and report the usage of AI models in your SBOM
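
For illustration, the following minimal sketch (not Endor Labs code) shows the kind of Hugging Face usage in a Python application that such a scan would surface; the model name is just an example of a publicly hosted open source model:

    # Illustrative application code: the model identifier below is downloaded
    # from Hugging Face at load time, making it an open source dependency that
    # belongs in your model inventory and SBOM.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_ID = "distilbert-base-uncased"  # example open source model on Hugging Face

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)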

Evaluate AI models for risk

AI models are dependencies that require the same risk assessment as other open source packages:

  • Help developers select safe AI models using a database of risk scores for open source models on Hugging Face
  • Screen AI models using 50 factors covering security, licensing, and operational risks (a toy scoring sketch follows this list)
  • Identify AI models with questionable sources, practices, or licenses
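
As a rough illustration of factor-based screening (the factors, weights, and scoring below are hypothetical, not Endor Labs' actual 50-factor model), a risk score can be thought of as a weighted aggregation of pass/fail checks across security, licensing, and operational categories:

    # Hypothetical sketch: a toy aggregation of factor checks into a 0-100 risk score.
    from dataclasses import dataclass

    @dataclass
    class Factor:
        name: str
        category: str   # "security", "licensing", or "operational"
        weight: float
        passed: bool

    def risk_score(factors: list[Factor]) -> float:
        """Higher is riskier: the share of weighted checks that failed."""
        total = sum(f.weight for f in factors)
        failed = sum(f.weight for f in factors if not f.passed)
        return 100.0 * failed / total if total else 0.0

    checks = [
        Factor("license is OSI-approved", "licensing", 3.0, passed=False),
        Factor("weights published in safetensors format", "security", 2.0, passed=True),
        Factor("model card documents training data", "operational", 1.0, passed=True),
    ]
    print(f"risk score: {risk_score(checks):.1f}/100")  # -> risk score: 50.0/100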

Help developers make safer choices 

Use policies to prevent use of AI models that don’t fit your risk profile:

  • Use pre-built policies to immediately surface findings from risky models
  • Create custom policies aligned with your risk tolerance and standards
  • Warn developers, or break builds, when high-risk models are detected (see the CI gate sketch below)
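
To make "break builds" concrete, here is a hypothetical CI gate; the findings file format and threshold are assumptions for illustration, not an actual Endor Labs integration. It exits non-zero, failing the pipeline, when any detected model is above the organization's risk threshold:

    # Hypothetical CI gate: reads a findings report produced by an earlier scan
    # step, blocks the build on high-risk models, and warns on everything else.
    import json
    import sys

    RISK_THRESHOLD = 70  # assumed organization-wide policy threshold

    def main(report_path: str) -> int:
        with open(report_path) as fh:
            # Assumed report shape: [{"model": "...", "risk_score": 83}, ...]
            findings = json.load(fh)

        violations = [f for f in findings if f["risk_score"] >= RISK_THRESHOLD]
        for v in violations:
            print(f"BLOCKED: {v['model']} (risk score {v['risk_score']})")
        for f in findings:
            if f not in violations:
                print(f"warning: review {f['model']} (risk score {f['risk_score']})")

        # Non-zero exit status breaks the build; zero lets it pass.
        return 1 if violations else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "model_findings.json"))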

Book a Demo
