The AppSec Reset: Building Trust in AI-powered Software
November 25, 2025

Author: Manoj Nair, Chief Innovation Officer, Snyk

It feels like only yesterday we were grappling with the dawn of open source software and the security challenges it introduced. Now, as an industry, we’re at a similar crossroads, but with an even more powerful and transformative force: AI. 

The speed at which AI has been adopted in software development is astounding. According to one GitHub survey, a staggering 97% of developers have integrated generative AI tools into their workflows.

Generative AI has fundamentally shifted how software is created. We’re moving from a world where humans manually write code line-by-line to one where AI agents are orchestrating entire development workflows from high-level prompts. This shift is happening faster than most of us ever imagined.

The New Threat Landscape

This transformation brings a new set of security challenges. Traditional application security (AppSec) was built for a world where code was intentionally authored by humans and manually reviewed. That model no longer holds up. 

AI-generated software introduces different kinds of vulnerabilities, and at a much faster pace. Here’s a look at some of the key new threats we’re facing:

  • Hallucinated packages: Generative AI models, trained on public code, don’t distinguish between safe and insecure code; they simply complete patterns. This can lead to “hallucinated” packages: references to libraries that don’t actually exist. 
    If an attacker then publishes a real package under one of these hallucinated names with malicious intent, it can be downloaded and deployed without raising any red flags. This type of package typosquatting is a real and growing threat (a minimal sketch of one defensive check follows this list).
  • Prompt injection and jailbreaking: With language as a new attack surface, malicious inputs can hijack the logic of a large language model (LLM). This is known as prompt injection.
    In addition, LLM jailbreaks can trick a model into ignoring safety constraints, potentially exposing internal systems, sensitive data, or access credentials.
  • Agentic drift: As we move toward autonomous AI agents, we face the risk of agentic drift, which occurs when an autonomous AI makes an unintended or unreviewed decision. Agents don’t ask for permission; they execute.
    Without proper guardrails, even well-intentioned goals can lead to dangerous outcomes. The more layers of abstraction between a prompt and the final outcome, the harder it is to verify if the behavior is on track.
  • AI model supply chain risks: AI models are the “new libraries,” and the AI model supply chain is largely unsecured. We need to know where our models came from, what data they were trained on, and who has access to their parameters.
    This introduces risks like model poisoning, where malicious behavior is embedded during fine-tuning. Another risk is data exfiltration via context, where a model accidentally leaks sensitive information from previous conversations. Unlike static, auditable code, AI models are dynamic, probabilistic, and often opaque.
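To make the hallucinated-package risk concrete, here is a minimal sketch (in Python, against the public PyPI JSON API) of the kind of pre-install check a team could add: it flags AI-suggested dependencies that do not exist on the registry or that appeared only very recently. The age threshold and heuristics are illustrative assumptions, not a Snyk feature and not a substitute for real software composition analysis.

```python
"""Sketch: sanity-check AI-suggested dependencies against PyPI.

Assumptions: the dependencies are Python packages and the standard
PyPI JSON API (https://pypi.org/pypi/<name>/json) is reachable.
This is an illustrative heuristic only.
"""
from datetime import datetime, timezone

import requests  # third-party; pip install requests

PYPI_JSON = "https://pypi.org/pypi/{name}/json"
MIN_AGE_DAYS = 90  # hypothetical threshold: very new packages deserve a closer look


def check_suggested_package(name: str) -> str:
    resp = requests.get(PYPI_JSON.format(name=name), timeout=10)
    if resp.status_code == 404:
        # The model may have hallucinated this package, or an attacker
        # could register the name later: do not install it blindly.
        return f"{name}: NOT FOUND on PyPI (possible hallucination)"
    resp.raise_for_status()
    data = resp.json()

    # Find the earliest upload time across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return f"{name}: exists but has no released files; review manually"

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < MIN_AGE_DAYS:
        return f"{name}: only {age_days} days old; review before installing"
    return f"{name}: exists, first published {age_days} days ago"


if __name__ == "__main__":
    for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
        print(check_suggested_package(pkg))
```

Even a check this simple, run before a suggested dependency is added to a manifest, turns a silent install into an explicit review decision.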

Building AI Trust: A New Foundation

Given these new challenges, traditional AppSec is no longer sufficient. It’s not just about finding human mistakes anymore; it’s about understanding the probabilistic decisions made by machines. We need a new foundation for security that moves at the speed of AI.

Trustworthy AI is a shared and necessary foundation for engineering and security teams, enabling them to build secure code at the speed of AI. It’s about having the confidence to build fast and adopt new tools without losing control. This foundation is built on three pillars:

Visibility

You can’t govern what you can’t see. Visibility means knowing which AI systems are being used, how they’re integrated, and what models are involved. It’s no longer enough to know what code was written; you also need to know who—or what—wrote it.

Prioritization

Not every AI-related event is a critical security incident. Prioritization involves identifying and acting on high-impact issues like prompt injection, hallucinated packages, or unauthorized model access using context-aware risk scoring.

Governance

Trust isn’t about creating friction; it’s about enabling responsible speed through adaptive controls. Effective governance means defining policies that evolve with your stack and automatically adjust as new models or agents are introduced.

Laying the Groundwork for AI Trust Today

Building trust in AI tooling is a practice you can start today. Just as traditional AppSec evolved to support open source adoption and containerization, it must now extend to the AI layer. Here are five essential steps every team can take to begin:

  1. Build your AI Bill of Materials (AI-BoM): You can’t secure what you can’t inventory. An AI-BoM maps out the AI components in your systems: models, datasets, plugins, and agent frameworks. Start by scanning your codebase and infrastructure for LLM references and embedded agents, then document where they live and who owns them (see the first sketch after this list).
  2. Threat model your LLM workflows: LLMs are probabilistic systems, not traditional software, which introduces new attack surfaces. Begin by mapping inputs, outputs, and critical access points. Ask questions like, “What could an adversary manipulate?” and “Where does trust break down?”.
  3. Train your developers on AI security risks: Developers who treat AI-generated code as correct by default can introduce serious vulnerabilities. Since generative AI is trained to be useful, not secure, teams must learn to critically assess its suggestions. Introduce security training paths focused on Generative AI and prompt-based development, and reinforce that “clean”-looking code can still carry hidden risks.
  4. Implement early policies for AI-generated code: It’s easier to guide behavior before habits set in. Create simple, enforceable policies about where AI-generated code can be used, how it’s reviewed, and when it needs to be flagged for inspection. These guidelines should be visible and actionable.
  5. Use human-in-the-loop review: Agent-based software is powerful, but autonomous behavior requires checks and balances. For agentic workflows, incorporate human-in-the-loop review before deployment to ensure safety and alignment (see the second sketch after this list).
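For step 1, a lightweight way to start is a script that walks a repository and records imports of well-known LLM SDKs and agent frameworks. The sketch below is a minimal Python illustration; the list of module names is an assumption and far from exhaustive, and a full AI-BoM would also cover models, datasets, and infrastructure.

```python
"""Sketch: seed an AI-BoM by finding LLM/agent SDK imports in a repo.

The module names below are assumed examples of common AI SDKs; extend
the set for your own stack. Output is a simple inventory of which
files reference which SDKs.
"""
import ast
import pathlib
from collections import defaultdict

# Hypothetical starter list of AI-related packages to look for.
AI_MODULES = {"openai", "anthropic", "langchain", "transformers", "llama_index", "litellm"}


def scan_repo(root: str) -> dict[str, set[str]]:
    findings: dict[str, set[str]] = defaultdict(set)
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files we cannot parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                top_level = name.split(".")[0]
                if top_level in AI_MODULES:
                    findings[str(path)].add(top_level)
    return findings


if __name__ == "__main__":
    for file, sdks in scan_repo(".").items():
        print(f"{file}: {', '.join(sorted(sdks))}")
```

The output is only a starting inventory, but it answers the first AI-BoM question: where in the codebase AI is actually being called.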
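For step 5, one simple pattern is an approval gate that blocks an agent’s high-impact actions until a human confirms them. The sketch below uses a hypothetical deploy_service action and a console prompt as the approval channel; in practice the approval would flow through your ticketing, chat, or CI/CD system.

```python
"""Sketch: a human-in-the-loop gate for agentic actions.

`deploy_service` is a hypothetical high-impact action; the console
prompt stands in for whatever approval channel a team actually uses.
"""
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    description: str              # human-readable summary the reviewer sees
    execute: Callable[[], None]   # the action the agent wants to take


def require_approval(action: ProposedAction) -> bool:
    """Block execution until a human explicitly approves the action."""
    print(f"Agent proposes: {action.description}")
    answer = input("Approve? [y/N] ").strip().lower()
    if answer == "y":
        action.execute()
        return True
    print("Action rejected; logging for review.")
    return False


def deploy_service() -> None:
    # Placeholder for the real deployment step an agent might trigger.
    print("Deploying service to production...")


if __name__ == "__main__":
    require_approval(ProposedAction(
        description="Deploy build 1.4.2 of payments-api to production",
        execute=deploy_service,
    ))
```

The design point is that the agent can propose anything, but irreversible actions only execute after an explicit human decision is recorded.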

The future of software is AI-powered, and the security of that software is still up to us. The teams that act early, set guardrails, and prioritize visibility from the start will be best positioned to build with confidence.

Key Takeaways

  • The threat landscape has evolved. New risks, such as hallucinated packages, prompt injection, agentic drift, and model supply chain vulnerabilities, are emerging, and traditional AppSec tools aren’t equipped to handle them.
  • Traditional security models are failing. Existing AppSec was built for a world of human-authored, manually reviewed code. It can’t keep up with the speed and probabilistic nature of AI-generated code, leading to AI fatigue for developers and security teams.
  • Building AI Trust is the new foundation for security. This shared approach is based on three pillars: Visibility to see where AI is being used, Prioritization to focus on critical risks, and Governance to implement adaptive guardrails.

The Path Forward

The age of autonomous development is here, and with it, a new era of security. The rules of software creation are being rewritten, and so are the risks. We can no longer rely on yesterday’s methods to secure tomorrow’s applications.

AI is a breakthrough for developers, offering instant productivity and faster delivery. For security teams, it introduces a new kind of uncertainty, making it crucial to establish a new foundation of trust. 

By building trust in AI tooling, organizations can move at the speed of AI without losing control, ensuring that secure code is no longer a bottleneck but an integral part of the development process. The teams that build this foundation today will be the ones best positioned to thrive in the future of software development.

Want to learn how Snyk can help you build AI trust? Check out our AI Trust Platform or book a demo today.

