
CISA and NCSC's Take on Secure AI Development

A breakdown of the "Guidelines for Secure AI System Development" document from CISA and NCSC.

Written by
Chris Hughes
Published on
November 30, 2023

Artificial intelligence (AI) is a growing topic of conversation, both from the perspective of how it can facilitate activities (such as security operations and secure coding) and from the perspective of the risks and pitfalls of insecurely using or integrating AI into broad aspects of our digital ecosystem without proper security and privacy considerations. The velocity of change has made it hard to be confident in making the right choices.

Luckily, entities including nation states and industry organizations are producing guidance, best practices, recommendations, and even regulation (as in the case of the European Union) with regard to the use of AI.

Just this week, we saw the release of the “Guidelines for Secure AI System Development” from the United States Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC). The publication covers key topics such as why AI security is different and who is responsible for developing secure AI, as well as tactical recommendations (practices) across secure AI design, development, deployment, and operations. It also emphasizes key themes that have been advocated by CISA and in the U.S. National Cyber Strategy (NCS), such as Secure-by-Design and software suppliers taking responsibility for their roles within the broader software supply chain ecosystem. It even highlights specific principles CISA prioritized in their latest Secure-by-Design document, which are:

  • Taking ownership of security outcomes for customers
  • Embracing radical transparency and accountability
  • Building organizational structure and leadership so secure-by-design is a top business priority

In this article we look at the guidance and some of the key takeaways. 

Why AI Security is Unique and Who’s Responsible

The publication opens by discussing some of the unique aspects of AI security, including: 

  • The use of models that allow computers to bring context to patterns, in some cases without explicit guidance from a human, and to generate predictions that can drive decision making and action. 
  • Novel vulnerabilities and attack vectors, such as adversarial machine learning (AML), where attackers cause unintended behaviors and consequences in AI and ML systems, such as degrading a model's performance, allowing users to perform unauthorized actions, or extracting sensitive data.
  • Attack techniques such as prompt injection and data poisoning. 

While the “who” of responsibility and accountability can be a subjective topic, the guidance points to the complexity of modern AI and software supply chains as a further complicating factor. However, it borrows a concept we’re all too familiar with from cloud computing: the shared responsibility model. The guidance breaks it down into two primary entities:

  • The “provider” — handles activities such as data curation, algorithmic development, design, deployment and maintenance.
  • The “user” — provides input and receives outputs. 

This seems simplistic at first; however, the guidance makes clear that providers often use third-party services, data, models, and more to facilitate the delivery of their own software and services. And much like in the cloud paradigm, the AI guidance points out that users generally lack sufficient visibility into the underlying services and software that AI technologies use, and typically lack the technical expertise to fully understand the risks associated with the systems they’re using. 

The guidance makes the case that providers of AI components should be responsible for the security outcomes of the users in their downstream supply chain, carrying out activities such as informing users of risks they have accepted that may impact those users, as well as advising users on how to use the components and services securely. This sense of ownership of course ties back to Secure-by-Design and language from the National Cyber Strategy and U.S. federal cyber leaders, who have emphasized that the onus lies with software suppliers rather than downstream customers and consumers. 

Practices for Secure AI System Design

The guidance emphasizes understanding risks during the design stage and utilizing techniques such as threat modeling to mitigate risk during the AI system design phase of the software development lifecycle (SDLC). 

Specific activities called out include:

  • Raise staff awareness of threats and risks
  • Model the threats to your system
  • Design your system for security as well as functionality and performance
  • Consider the security benefits and trade-offs when selecting your AI model(s)

Awareness of Threats and Risks

Organizations designing and using AI systems need to raise their staff's awareness of the risks and threats to AI, much like broader cybersecurity awareness, but taking into consideration the unique aspects of AI. This includes system owners, senior leadership, and technical staff (such as developers and data scientists), each of whom has a unique role and the ability to introduce risk to the organization and its systems through the use of AI. 

Threat Modeling

Another key activity is threat modeling, which involves understanding the potential threats to the system and associated impacts. AI’s unique attack vectors should be considered. Traditional threat modeling methodologies can be applied but through the lens of the use of AI, the data involved, potential attack vectors, and the relevant threats. 

Design for Function, Performance, and Security

The guidance stresses that decisions around system design should consider functionality, performance, and security. This of course requires context such as desired functionality, user experience (UX), where the system will be deployed, what sort of governance and oversight will be required, and so on. Organizations will inevitably struggle with the potential tradeoffs between performance and security, as well as competing incentives that introduce friction and rigor, such as speed to market, feature development, and revenue.

Selection Benefits and Tradeoffs

Specific considerations called out include whether to train new models or use existing ones, the use of external libraries, scanning and isolation, and external API use, among others. Each of these decisions has both performance and security implications that should be weighed together. There are also user interaction considerations for AI-specific risks, such as implementing effective guardrails for model input, using secure default settings, and requiring users to opt into riskier capabilities rather than making them available by default, another nod to the Secure-by-Default approach CISA has advocated for elsewhere. 
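
As a loose illustration of what an input guardrail might look like, the sketch below (in Python, with thresholds and deny-list phrases invented for the example, not taken from the guidance) validates prompts before they ever reach a model. A production guardrail would rely on purpose-built classifiers rather than simple string matching.

    MAX_PROMPT_LENGTH = 4_000  # illustrative cap to bound resource use and abuse

    # Naive deny-list of phrases often seen in prompt-injection attempts.
    SUSPICIOUS_PHRASES = ("ignore previous instructions", "reveal your system prompt")

    def validate_prompt(prompt: str) -> str:
        """Reject prompts that are oversized or contain obvious injection markers."""
        if len(prompt) > MAX_PROMPT_LENGTH:
            raise ValueError("Prompt exceeds maximum allowed length")
        lowered = prompt.lower()
        for phrase in SUSPICIOUS_PHRASES:
            if phrase in lowered:
                raise ValueError(f"Prompt rejected: contains {phrase!r}")
        return prompt

    if __name__ == "__main__":
        print(validate_prompt("Summarize our incident response policy."))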

Practices for Secure AI System Development

Moving past secure design, the guidance touches on secure AI system development. Specific activities called out include:

  • Securing your supply chain
  • Identifying, tracking and protecting your assets
  • Documenting your data, models and prompts 
  • Managing your technical debt

It should come as no surprise that supply chain security is emphasized, given the rise of and industry attention on topics such as software supply chain security. The guidance points out the need to assess and monitor the security of your AI supply chains throughout the SDLC and to require that suppliers adhere to the standards your own organization applies to its software. 

The challenge, of course, is that organizations are making massive use of open source software (OSS), but OSS maintainers and contributors are not suppliers. You aren't able to hold them to any standards, since most OSS is free to use, as-is, and they're under no legal obligation to act in accordance with your desires or demands. This is why it's critical to understand which OSS components, projects, and libraries are involved in your AI system development and to implement proper security and governance. 

Supply Chain Security

The guidance points out that when utilizing external hardware and software components, they should be well-secured, documented, and verified. Given most organizations have a very poor understanding of their OSS usage and broader supply chain and dependencies, this recommendation is crucial. The guidance cites additional resources such as the NCSC's Supply Chain Guidance and frameworks such as Supply Chain Levels for Software Artifacts (SLSA).
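
The guidance doesn't prescribe tooling, but a first step toward the visibility it describes is simply knowing which components are in use. As a rough sketch, assuming a Python environment, the snippet below enumerates installed packages with the standard-library importlib.metadata module to produce a minimal dependency inventory that could feed into an SBOM or review process.

    import json
    from importlib.metadata import distributions

    def build_dependency_inventory() -> list[dict]:
        """Enumerate installed Python packages as a minimal dependency inventory."""
        inventory = []
        for dist in distributions():
            inventory.append({
                "name": dist.metadata["Name"],
                "version": dist.version,
            })
        return sorted(inventory, key=lambda item: (item["name"] or "").lower())

    if __name__ == "__main__":
        # Emit the inventory as JSON so it can be stored, diffed, or reviewed.
        print(json.dumps(build_dependency_inventory(), indent=2))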

Identification, Tracking, and Protection of Assets

Another notable practice cited is the identification, tracking, and protection of assets. This is unsurprising, given that asset inventory has been a CIS Critical Security Control for years. With AI systems, this involves models, data, prompts, software, documentation, logs, and assessments. Organizations must understand where these assets exist, the state of their risk assessment, and the documentation of those risks for explicit acceptance. There is also an emphasis on practices such as version control, incident response, and business continuity, including the ability to restore to known good states (e.g., backups). Lastly, there is the need to govern the data that AI systems can access and utilize, as well as generate. 

Documenting Data, Models, and Prompts 

Documenting data, models, and prompts is listed as a key practice of secure AI system development. This means the full lifecycle of model management, from creation through operation and beyond, along with the models' associated data sets, is captured and codified. Data to be documented includes the sources of training data, with hashes or signatures used to ensure integrity. 
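
A minimal sketch of the hashing idea follows; the file paths and manifest format are hypothetical, and a real pipeline would store the manifest somewhere tamper-resistant. The snippet records SHA-256 digests for data and model files so their integrity can be re-checked later.

    import hashlib
    import json
    from pathlib import Path

    def sha256_of_file(path: Path) -> str:
        """Compute the SHA-256 digest of a file in streaming fashion."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def write_manifest(paths: list[Path], manifest_path: Path) -> None:
        """Record digests for each artifact so integrity can be verified later."""
        manifest = {str(p): sha256_of_file(p) for p in paths}
        manifest_path.write_text(json.dumps(manifest, indent=2))

    def verify_manifest(manifest_path: Path) -> bool:
        """Re-hash each artifact and compare against the recorded digest."""
        manifest = json.loads(manifest_path.read_text())
        return all(sha256_of_file(Path(p)) == digest for p, digest in manifest.items())

    if __name__ == "__main__":
        # Hypothetical artifact paths; adjust to your own data and model files.
        artifacts = [Path("data/train.csv"), Path("models/classifier.bin")]
        write_manifest(artifacts, Path("artifact-manifest.json"))
        print("Integrity verified:", verify_manifest(Path("artifact-manifest.json")))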

Management of Technical Debt

The last practice in secure AI system development is the management of technical debt, in which CISA discusses activities such as the identification, tracking, and management of technical debt throughout the AI system SDLC. This means documenting architectural decisions, risk acceptance, and the technical debt that accrues throughout the system's life cycle. Historically, organizations make decisions to accept technical debt early on but never go back to address it, leaving it to accumulate and leading to performance and security risks down the road, much like interest accruing on a credit card that wreaks havoc over time, except in this case the technical debt sits waiting to be exploited by malicious actors. 

Practices for Secure AI Deployment

After AI systems have been designed and developed, they get deployed into hosting environments and digital enterprises. The guidance cites risks such as the compromise of models and infrastructure, or experiencing incidents without sufficient incident response (IR) processes in place.

Specific practices cited include:

  • Securing your infrastructure
  • Continuously protecting models
  • Developing IR procedures
  • Releasing AI responsibly
  • Making it easy to do the right things

Infrastructure Security

Mature practices the guidance covers for protecting infrastructure include implementing least-permissive access controls for APIs, models, and data, and segmenting environments, especially those holding sensitive data. These practices of course align with principles from methodologies such as zero trust, which take an implicit-deny approach and look to limit the blast radius of incidents, should they occur. 
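
As a loose illustration of least-permissive access to a model-serving API (the endpoint paths and scope names below are invented for the example), each API token carries only the scopes it needs, and anything not explicitly granted is denied.

    # Default-deny authorization check for a hypothetical model-serving API.
    REQUIRED_SCOPES = {
        "POST /v1/predict": {"model:infer"},
        "POST /v1/fine-tune": {"model:train", "data:read"},
        "GET /v1/training-data": {"data:read"},
    }

    def is_authorized(endpoint: str, granted_scopes: set[str]) -> bool:
        """Allow a call only if every scope the endpoint requires has been granted."""
        required = REQUIRED_SCOPES.get(endpoint)
        if required is None:
            return False  # unknown endpoints are denied by default
        return required.issubset(granted_scopes)

    if __name__ == "__main__":
        inference_token_scopes = {"model:infer"}  # token scoped to inference only
        print(is_authorized("POST /v1/predict", inference_token_scopes))       # True
        print(is_authorized("GET /v1/training-data", inference_token_scopes))  # False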

Continuously Protect Models

Models are a fundamental part of how AI systems function and provide value, and the publication cites activities such as ensuring malicious actors can't directly or indirectly tamper with a model, its data, or its prompts. It cites measures such as model validation, cryptographic hashes and signatures, and privacy-enhancing technologies such as homomorphic encryption. 
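
To illustrate the signature side of that, here is a small sketch using the third-party cryptography package; key management is deliberately simplified (real deployments would protect the signing key in a KMS or HSM), and the model bytes are a placeholder. A model artifact is signed at release time and verified before it is loaded.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def sign_artifact(private_key: Ed25519PrivateKey, artifact: bytes) -> bytes:
        """Produce a detached signature over the model bytes at release time."""
        return private_key.sign(artifact)

    def verify_artifact(public_key: Ed25519PublicKey, artifact: bytes, signature: bytes) -> bool:
        """Verify the artifact before loading; refuse to use it on mismatch."""
        try:
            public_key.verify(signature, artifact)
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        key = Ed25519PrivateKey.generate()
        model_bytes = b"placeholder model weights"  # stand-in for a real model file
        signature = sign_artifact(key, model_bytes)
        print(verify_artifact(key.public_key(), model_bytes, signature))                 # True
        print(verify_artifact(key.public_key(), model_bytes + b"tampered", signature))   # False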

Incident Response Procedures

Much like in any digital environment, no system is infallible, which is why IR procedures are key. This includes the processes and procedures themselves, the execution of tabletop exercises, and having sufficient backups to recover from incidents. There's also an emphasis on providing fundamental security measures, such as audit logs, to customers and users at no extra charge for their IR purposes, which of course is a nod to suppliers bearing more of the burden and cost than customers when it comes to Secure-by-Design/Default. 
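
As a rough illustration of the kind of audit trail that supports incident response (the field names are invented, and real systems would ship these records to centralized, tamper-resistant storage rather than a local file), the sketch below emits structured JSON records for each model interaction.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai-audit.log", level=logging.INFO, format="%(message)s")
    audit_logger = logging.getLogger("ai_audit")

    def record_audit_event(user_id: str, action: str, detail: str) -> None:
        """Append a structured audit record for later incident-response review."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "action": action,
            "detail": detail,
        }
        audit_logger.info(json.dumps(event))

    if __name__ == "__main__":
        record_audit_event("user-123", "model.invoke", "prompt accepted, 512 tokens")
        record_audit_event("user-123", "guardrail.block", "prompt rejected by input filter")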

Responsible AI Model Releases

The release of AI models should be done responsibly, ensuring proper testing and evaluation has occurred, including red teaming, which is specifically cited, and aligns with the recent AI Executive Order (EO) which calls for the practice as well. There’s also an emphasis on being transparent with users about both limitations and potential failure modes. 

Make it Easy to Do the Right Thing

Lastly, there is the practice of making it easy for users to do the right thing. This is described as making the most secure setting the only option or, where additional configuration is necessary and possible, making the secure setting the default, and implementing guardrails to prevent users from putting themselves at risk. This includes providing sufficient guidance about the model and system and clearly delineating shared responsibility models, so users make no assumptions about what a provider is doing that could lead to a false sense of security, as well as being transparent about the data used and stored and who may be accessing it. This is a key callout, as many have expressed concerns about the use of personal and sensitive data in AI models and systems, and fears of bias and abuse have been front and center.
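
A minimal sketch of what "secure by default, opt in to risk" could look like in configuration follows; the settings and names are hypothetical, not from the guidance. Riskier capabilities are off out of the box, and enabling one is an explicit, logged decision.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.WARNING)
    logger = logging.getLogger("ai_settings")

    @dataclass
    class AssistantConfig:
        """Configuration where the secure choice is the default choice."""
        retain_conversation_data: bool = False   # data retention is opt-in
        allow_plugin_execution: bool = False     # riskier capability is off by default
        log_prompts_for_audit: bool = True       # safety-relevant logging is on by default

    def enable_plugins(config: AssistantConfig) -> AssistantConfig:
        """Riskier functionality requires an explicit opt-in, and the choice is logged."""
        logger.warning("Plugin execution enabled; review the documented risks before use.")
        config.allow_plugin_execution = True
        return config

    if __name__ == "__main__":
        config = AssistantConfig()       # secure defaults out of the box
        config = enable_plugins(config)  # explicit, visible opt-in to a riskier feature
        print(config)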

Practices for Secure AI Operation and Maintenance

Last among the SDLC areas discussed is the secure operation and maintenance of AI systems and software. This means the system has been designed, developed, and deployed and is now in operation. Ideally, security has been integrated throughout the SDLC, rather than, as is typical, bolted onto systems already in production environments. 

Key practices cited include:

  • Monitoring the system's behavior
  • Monitoring the system's inputs
  • Following a Secure-by-Design approach to updates
  • Collecting and sharing lessons learned

Monitoring System Behavior & Inputs

The first two practices recognize the importance of runtime security and observability: identifying potentially malicious activity and deviations from expected behavior in models and systems. There's also the need to monitor system inputs for purposes ranging from compliance and audit to identifying adversarial inputs and attempts to misuse systems. 
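
A toy example of input monitoring follows; the baseline statistic and thresholds are arbitrary choices for illustration, and real systems would combine this kind of signal with content classifiers and centralized telemetry. It tracks each user's recent prompt sizes and flags sudden deviations for review.

    from collections import defaultdict, deque
    from statistics import mean, stdev

    # Rolling window of recent prompt lengths per user; all thresholds are illustrative.
    WINDOW_SIZE = 50
    recent_lengths: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW_SIZE))

    def record_and_check(user_id: str, prompt: str) -> bool:
        """Record an input and flag it if it deviates sharply from the user's baseline."""
        history = recent_lengths[user_id]
        length = len(prompt)
        anomalous = False
        if len(history) >= 10:  # wait for a minimal baseline before flagging anything
            baseline = mean(history)
            spread = stdev(history)
            # Crude heuristic: far outside the user's historical range warrants review.
            if abs(length - baseline) > max(3 * spread, 0.5 * baseline):
                anomalous = True
        history.append(length)
        return anomalous

    if __name__ == "__main__":
        for _ in range(20):
            record_and_check("user-1", "short routine question about logs")
        print(record_and_check("user-1", "x" * 10_000))  # sudden outlier, prints True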

Secure-By-Design 

While Secure-by-Design is key in early SDLC phases such as design and development, it must be a focal point for system updates as well. This includes activities such as using secure update and distribution procedures and supporting users through system upgrades and update processes, for example through versioned APIs and previews of new features and functionality before release. 

Share Lessons Learned

Lastly, there is the need to collect and share lessons learned. Sharing potential risks, threats, vulnerabilities, and lessons learned benefits broader communities of interest, information sharing efforts, and the wider ecosystem of AI system developers and consumers. 

Bringing it all together

CISA and NCSC's secure AI guidelines represent a comprehensive approach to securing AI system development throughout the entire SDLC. Software suppliers producing AI systems who leverage this guidance can ensure that security is baked in from the outset and mitigate some of the prevailing security and privacy concerns around AI system development and use. 

The guidance also aligns with broader efforts such as the Bletchley Declaration on AI safety, which involved the U.S., U.K., China, and 25 other nations agreeing to address potential risks from AI. These nations can leverage guidance such as the CISA/NCSC publication, as well as the additional resources cited throughout it, to act on the declaration and ensure society can benefit from the tremendous promise of AI while mitigating risks.
