The Cost of AI Velocity: 5 Actions Dev Leaders Must Take to Secure Their Codebase from AI Vulnerabilities

Here’s a hypothetical for you: You discover a developer on your team produces code where 40-50% contains exploitable vulnerabilities. How long before your CTO calls you up for a serious talk?

AI-powered coding tools like GitHub Copilot and Cursor are transforming software development, enabling developers to generate code at unprecedented speeds. But they’re also generating code with exactly those vulnerability rates, and whether you like it or not, they’re part of your team.

Recent research underscores significant security risks in AI-generated code. A November 2024 report from Georgetown University’s Center for Security and Emerging Technology (CSET), among the most recent academic studies on the topic, found that 48% of code snippets generated by five major large language models contained vulnerabilities flagged by the ESBMC verification tool, highlighting their potential for malicious exploitation.

Similarly, IBM’s 2025 Cost of a Data Breach Report found that 97% of organizations reported an AI-related security incident.

Checkmarx’s own Future of AppSec in the Age of AI report corroborates these findings, showing a direct correlation between AI-generated code and a rise in almost all risk-related metrics:

  • 34% of organizations report that over 60% of their codebase is AI-generated, increasing exposure to vulnerabilities.
  • 81% knowingly ship vulnerable code to meet deadlines.
  • Only 18% enforce governance policies for AI tool usage.
  • 20% detect unapproved AI tool use, constituting true Shadow AI.
  • 98% experienced at least one security breach in the past year.

Even before AI entered the picture, security was already struggling to keep pace with accelerating development cycles. Now, with the volume and velocity of AI-generated code, today’s AppSec structures aren’t just lagging. They’re fundamentally unprepared.

With unvetted AI-generated code flooding production environments, dev leaders face a critical choice: continue the perilous trade-off between velocity and security, or take decisive action.

The AI Trifecta: The Three Forces Derailing Security in AI-Generated Development

As Heads of Development face mounting pressure to ship faster using AI-generated code, they’re also expected to maintain security, stability, and compliance. But three structural challenges are quietly undermining their ability to do both.

These challenges—velocity without security alignment, lack of AI governance, and under-integrated AppSec tooling—form what we call the AI Trifecta: a convergence of forces that puts organizations on a collision course with risk. Addressing just one isn’t enough. To create a sustainable balance between speed and security, all three must be tackled in parallel.

Speed-First Culture Breeds Structural Risk

The vulnerability rates in AI-generated code would be concerning enough on their own. But as Checkmarx’s report finds, they’re combined with an increasingly alarming reality: organizations aren’t just accidentally shipping vulnerable code—they’re doing it knowingly, systematically, and at accelerating rates.

As mentioned above, 81% of organizations knowingly ship vulnerable code. Not because they want to, but because external pressures make it feel necessary, while developers’ optimism bias leads teams to underestimate the real risk:

  • 38% of organizations deploy vulnerable code to meet feature deadlines.
  • 33% of developers admit to hoping vulnerabilities “won’t be discovered”—a sharp rise from 15% in 2024.

This reveals a systemic underlying problem. KPIs that prioritize feature delivery over security, compressed review timelines, and unrestricted AI tool access all incentivize dangerous shortcuts. When AI can generate code faster than security teams can review it, the pressure to “ship now, fix later” becomes overwhelming—and eventually leads to “ship now, fix never, get breached eventually.”

The result is mounting security debt: increased breach exposure, operational drag from emergency patches, and eroded customer trust. IBM’s 2025 Report mentioned above puts the average breach cost at $4.4 million, with vulnerabilities in custom code being a significant contributor.

The AI Governance Crisis: Development Without Guardrails

Checkmarx’s report also reveals a staggering governance gap that should alarm every development leader:

  • Only 18% of organizations have established approved AI tool lists.
  • 20% detect unapproved AI tool use—constituting true Shadow AI.
  • Only 18% enforce governance policies for AI tool usage.
  • 82% lack comprehensive oversight of AI development tools.

IBM’s report paints a slightly more optimistic, yet still concerning, picture: “only” 63% of organizations lacked AI governance policies to manage AI or prevent the proliferation of shadow AI.

This represents a fundamental breakdown in risk management. The majority of organizations have essentially handed over significant portions of their codebase to ungoverned AI systems, creating massive blind spots in security oversight.

AppSec Tools Lag Behind Developer Velocity

Despite years of talk around DevSecOps, the reality on the ground tells a different story: most security tooling still isn’t keeping pace with the way modern development teams work.

Key security practices—like Dynamic Application Security Testing (DAST), Infrastructure-as-Code (IaC) scanning, and container security—are adopted by fewer than 50% of organizations. And even when in use, they’re often bolted on after the fact, rather than embedded into the daily development workflow.

The result? Security becomes an external gate, not an integrated part of the build process. Tools that aren’t wired into IDEs, pull requests, or CI/CD pipelines are easily ignored or deprioritized under delivery pressure.

To secure AI-accelerated development, AppSec needs to live where the code lives—in the hands of developers, in real time, as part of the flow. Without that, every “shift-left” promise is just a theory.
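
To make this concrete, here is a minimal sketch of a merge gate a CI/CD pipeline could run on every pull request. The `scanner-cli` command and its JSON output shape are hypothetical stand-ins for whatever scanner your organization uses; the point is the pattern of failing the build on high-severity findings rather than reporting them after release.

```python
import json
import subprocess
import sys

HIGH_SEVERITIES = {"critical", "high"}

def run_scanner(target_dir: str) -> list[dict]:
    """Invoke a (hypothetical) scanner CLI and parse its JSON findings."""
    result = subprocess.run(
        ["scanner-cli", "scan", target_dir, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("findings", [])

def main() -> int:
    findings = run_scanner(".")
    blocking = [f for f in findings
                if f.get("severity", "").lower() in HIGH_SEVERITIES]
    for f in blocking:
        print(f"[BLOCKED] {f.get('rule')}: {f.get('file')}:{f.get('line')}")
    # A non-zero exit fails the pipeline, turning security into a
    # build-time gate instead of a post-release report.
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(main())
```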

Five High-Impact Actions for Heads of Development

To navigate the AI Trifecta, Heads of Development must drive strategic change, align cross-functional teams, and overcome organizational resistance.

The following five actions offer evidence-backed strategies to balance AI-driven productivity with robust security, tailored to the leadership challenges of managing teams, budgets, and stakeholders.

1. Redefine Success Metrics to Counter Speed-First Culture

Shift KPIs to prioritize security alongside velocity, addressing the speed-first culture’s risks. Implement metrics like:

  • Fix rate by vulnerability severity: Ensure 90% resolution of high-severity issues pre-release, using CVSS for code vulnerabilities.
  • AI-specific risk scores (AIVSS): Adopt the OWASP AIVSS framework to quantify agentic AI risks (e.g., prompt injection, context poisoning), targeting a <10% rate of high-risk AI behaviors in production code.
  • Mean Time to Remediate (MTTR): Aim for under 48 hours for critical vulnerabilities, per NIST guidelines (NIST, 2024).
  • Releases with unresolved vulnerabilities: Target <5% to minimize risk exposure.
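
As a rough illustration, these KPIs can be computed from findings exported out of an AppSec platform. The record schema below (`severity`, `opened`, `resolved`, `release`) is an assumption for the sketch, not any particular tool’s API.

```python
from datetime import datetime
from statistics import mean

# Illustrative export; real data would come from your AppSec platform.
findings = [
    {"severity": "high", "opened": "2025-06-01T09:00", "resolved": "2025-06-02T15:00", "release": "1.4"},
    {"severity": "high", "opened": "2025-06-03T10:00", "resolved": None, "release": "1.4"},
    {"severity": "low", "opened": "2025-06-04T11:00", "resolved": "2025-06-20T11:00", "release": "1.5"},
]

def fix_rate(findings, severity):
    """Share of findings at a given severity resolved pre-release."""
    sev = [f for f in findings if f["severity"] == severity]
    return sum(f["resolved"] is not None for f in sev) / len(sev) if sev else 1.0

def mttr_hours(findings, severity):
    """Mean Time to Remediate, in hours, over resolved findings."""
    fmt = "%Y-%m-%dT%H:%M"
    deltas = [
        (datetime.strptime(f["resolved"], fmt)
         - datetime.strptime(f["opened"], fmt)).total_seconds() / 3600
        for f in findings
        if f["severity"] == severity and f["resolved"] is not None
    ]
    return mean(deltas) if deltas else 0.0

def releases_with_open_vulns(findings):
    """Fraction of releases shipped with at least one unresolved finding."""
    releases = {f["release"] for f in findings}
    dirty = {f["release"] for f in findings if f["resolved"] is None}
    return len(dirty) / len(releases) if releases else 0.0

print(f"High-severity fix rate: {fix_rate(findings, 'high'):.0%}")             # target >= 90%
print(f"High-severity MTTR: {mttr_hours(findings, 'high'):.1f}h")              # target < 48h
print(f"Releases with open vulns: {releases_with_open_vulns(findings):.0%}")   # target < 5%
```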

2. Enforce AI Tool Governance to Close the Governance Gap

To close the governance gap, Heads of Development must collaborate with AppSec teams and CISOs to design policies that secure AI usage without disrupting developer workflows. Implement:

  • Approved AI tool lists: Restrict usage to vetted platforms, ensuring integration with enterprise security policies (e.g., SSO, encryption). Conduct a 90-day audit to identify and phase out unapproved tools, reducing shadow AI risks.
  • Prompt transparency and audit trails: Log AI-generated code and prompts, incorporating OWASP AIVSS scores to assess behavioral risks (e.g., autonomy, tool misuse). A minimal audit-trail sketch follows this list.
  • Commit-time scanning: Deploy real-time vulnerability scanning in CI/CD pipelines to catch AI-specific issues like prompt injection, with AIVSS-guided prioritization.

Collaboration Process:

  • Form a Governance Task Force: Create a cross-functional team with AppSec, CISOs, developers, and legal to define policies and address both technical and behavioral AI risks.
  • Align on DORA Metrics: Work with CISOs to balance security with velocity and eliminate friction using DORA metrics as a mutual guide.
  • Invest in Developer Education: Provide ongoing training around AI-assisted coding and emerging AppSec risks. Ensure your teams understand how AI-generated code can introduce new threat vectors, covering secure usage practices, threat modeling, and mitigation techniques.
  • Pilot and Iterate: Start with a pilot in a high-risk business unit, using AIVSS to score AI tool risks and refine policies. Scale after 90 days, incorporating developer feedback to minimize friction.
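
To illustrate the first three bullets, here is a minimal sketch of an allowlist check plus audit trail that a commit hook or IDE plugin could call on each AI-assisted change. The tool names, the JSONL log, and the `aivss_score` field are illustrative assumptions, not a vendor’s schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Assumed policy artifacts maintained by the governance task force.
APPROVED_TOOLS = {"github-copilot", "cursor"}
AUDIT_LOG = Path("ai_audit_log.jsonl")

def record_ai_contribution(tool: str, prompt: str,
                           files: list[str], aivss_score: float) -> bool:
    """Append an audit record for an AI-assisted change and flag shadow AI.

    Returns False when the tool is not on the approved list. The AIVSS
    score is assumed to come from whatever behavioral-risk assessment
    your AppSec team runs.
    """
    approved = tool in APPROVED_TOOLS
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "approved": approved,
        "prompt": prompt,
        "files": files,
        "aivss_score": aivss_score,
    }
    # Every AI contribution is logged, approved or not, so the audit
    # trail doubles as a shadow-AI detector.
    with AUDIT_LOG.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return approved

# Example: a commit hook would call this on each AI-generated change.
if not record_ai_contribution("unvetted-assistant", "generate auth middleware",
                              ["auth/middleware.py"], aivss_score=7.8):
    print("Shadow AI detected: tool is not on the approved list.")
```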

3. Drive Adoption of Unified, Developer-Native AppSec Platforms

Heads of Development must champion the adoption of a unified AppSec platform that lives in developers’ IDEs, covers diverse risks (code, AI, infrastructure), and prioritizes alerts based on exploitability to avoid overwhelming teams:

  1. Advocate for a Unified Platform: Push for unified platforms that integrate SAST, SCA, DAST, and IaC scanning into IDEs and CI/CD pipelines. These platforms correlate findings across code, open-source libraries, APIs, and cloud environments, making it easier to bridge the gap between development and application security.
  2. Prioritize Exploitability-Based Alerts: Ensure the platform surfaces high-impact vulnerabilities first, based on contextualized exploitability (a simple scoring sketch follows this list).
  3. Collaborate with AppSec and CISOs: Form a cross-functional council with AppSec, CISOs, and developers to select a platform that aligns with organizational needs.
  4. Pilot and Scale: Start with a 60-day pilot in a high-velocity team, testing IDE-integrated tools and measuring DORA metric improvements (e.g., 2x Deployment Frequency). Scale to other teams after validating developer adoption, and address resistance by involving developers in tool selection and customizing alerts to their workflows.
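
For the exploitability-based prioritization in step 2, here is a deliberately simple scoring sketch. The reachability and exposure multipliers are illustrative assumptions; a real platform would derive this context from correlated code, API, and cloud data.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    cvss: float            # base severity, 0-10
    reachable: bool        # is the vulnerable code actually executed?
    internet_facing: bool  # is the affected service publicly exposed?

def exploitability_score(f: Finding) -> float:
    """Weight raw severity by runtime context (illustrative multipliers)."""
    score = f.cvss
    score *= 1.0 if f.reachable else 0.3        # unreachable code is far less urgent
    score *= 1.5 if f.internet_facing else 1.0  # public exposure raises priority
    return score

findings = [
    Finding("sql-injection", cvss=9.1, reachable=True, internet_facing=True),
    Finding("weak-hash", cvss=7.4, reachable=False, internet_facing=False),
    Finding("path-traversal", cvss=8.2, reachable=True, internet_facing=False),
]

# Surface the highest-impact items first instead of drowning developers
# in every raw finding.
for f in sorted(findings, key=exploitability_score, reverse=True):
    print(f"{exploitability_score(f):5.2f}  {f.rule}")
```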

4. Champion Application Security Posture Management (ASPM) for Strategic Oversight

ASPM tools provide unified visibility across custom code, open-source libraries, APIs, cloud environments, and AI-driven systems, addressing all three AI Trifecta risks (speed, governance, tooling).

Heads of Development must champion ASPM adoption to reduce risk exposure, align security with business goals, and support developer velocity, leaving technical implementation to AppSec teams:

  • Advocate for ASPM Adoption: Push for ASPM platforms that integrate with IDEs (e.g., VS Code) and CI/CD pipelines, correlating CVSS and AIVSS scores to prioritize exploitable vulnerabilities (e.g., AI agent autonomy, code-based SQL injection). A minimal correlation sketch follows this list.
  • Ensure Developer-Friendly Integration: Mandate that ASPM tools deliver real-time, actionable alerts within developer workflows, minimizing context switching and supporting DORA metrics like Lead Time for Changes.
  • Lead Cross-Functional Alignment: Form a governance council with AppSec, CISOs, developers, and business leaders to define ASPM requirements, ensuring alignment with NIST AI RMF and organizational priorities.
  • Pilot and Scale Strategically: Launch a 90-day ASPM pilot in a critical business unit, measuring reductions in breach risk and DORA metric improvements (e.g., 40% lower Change Failure Rate). Scale enterprise-wide after validating ROI.
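
The correlation step is the heart of ASPM. Below is a minimal sketch that folds findings from separate scanners into a single per-asset risk view; the field names and the max-score rollup are simplifying assumptions, not any product’s actual correlation logic.

```python
from collections import defaultdict

# Illustrative findings from different scanners; field names are assumptions.
sast = [{"asset": "payments-api", "kind": "code", "cvss": 8.6}]
sca = [{"asset": "payments-api", "kind": "dependency", "cvss": 7.5}]
ai_risk = [{"asset": "payments-api", "kind": "ai-agent", "aivss": 8.9},
           {"asset": "internal-batch", "kind": "ai-agent", "aivss": 4.2}]

def asset_posture(sast, sca, ai_risk):
    """Correlate per-tool findings into one risk view per asset.

    ASPM's value is this correlation step: instead of three disconnected
    reports, leadership sees one prioritized list of assets. Taking the
    max score per asset is a deliberately simple stand-in for a real
    platform's correlation logic.
    """
    posture = defaultdict(list)
    for f in sast + sca:
        posture[f["asset"]].append(("CVSS", f["cvss"], f["kind"]))
    for f in ai_risk:
        posture[f["asset"]].append(("AIVSS", f["aivss"], f["kind"]))
    return {asset: max(scores, key=lambda s: s[1])
            for asset, scores in posture.items()}

# Print assets in descending order of their worst correlated risk.
for asset, (scheme, score, kind) in sorted(
        asset_posture(sast, sca, ai_risk).items(),
        key=lambda kv: kv[1][1], reverse=True):
    print(f"{asset:15s} top risk: {scheme} {score} ({kind})")
```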

5. Leverage Agentic AI for Scalable Security

According to a recent IDC report, in response to the risks of AI-generated code there is a growing shift toward agentic AppSec: autonomous, role-specific agents that operate within the tools developers and AppSec teams already use.

These agents are designed not to scan after the fact, but to prevent vulnerabilities in real time—from code creation to policy enforcement to executive visibility.


Agentic AI security tools built to integrate into the developer’s workflow, like Checkmarx One Developer Assist, automate real-time detection, remediation, and policy enforcement at commit time.

Agentic AI helps your developers maintain velocity while mitigating the risks of AI-generated code by:

  1. Embedding Real-Time Vulnerability Detection in IDEs: The Developer Assist Agent integrates into IDEs to scan AI-generated code as it’s written, identifying vulnerabilities like SQL injection or prompt injection within seconds without disrupting developer workflows.
  2. Providing Guided Remediation with Actionable Code Fixes: The agent uses generative AI to suggest tailored code snippets for fixing vulnerabilities directly in the IDE, with confidence scores (0–100) indicating exploitability (a simplified version of this loop is sketched after this list).
  3. Enabling Efficient, Customized Security Queries: AI Query Builder allows developers to create tailored security queries in natural language, scanning AI-generated code and open-source libraries for malicious packages and vulnerabilities.
  4. Enhancing Governance with Real-Time Code Validation: Agentic AI can enforce secure coding policies at commit time, mitigating the governance gap and shadow AI risks while supporting alignment with the NIST AI RMF.
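
To show the shape of this commit-time loop (and emphatically not Checkmarx’s actual implementation), here is a highly simplified sketch: detect an issue in freshly generated code, propose a fix when confidence is high enough, and block the commit until it is addressed. The marker-based detection, the canned fix, and the 80-point threshold are all illustrative assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 80  # assumed policy: act only on high-confidence findings

@dataclass
class Detection:
    rule: str
    snippet: str
    confidence: int  # 0-100, how exploitable the agent believes this is

def detect(code: str) -> list[Detection]:
    """Stand-in for the agent's real-time analysis of freshly written code."""
    detections = []
    if 'execute(f"' in code:  # naive marker for string-built SQL
        detections.append(Detection("sql-injection", code, confidence=92))
    return detections

def suggest_fix(d: Detection) -> str:
    """Stand-in for generative remediation: propose a parameterized query."""
    return 'cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))'

def on_commit(code: str) -> bool:
    """Agent loop: detect, remediate, and enforce policy at commit time."""
    for d in detect(code):
        if d.confidence >= CONFIDENCE_THRESHOLD:
            print(f"[{d.rule}] confidence {d.confidence}: suggested fix ->")
            print("  " + suggest_fix(d))
            return False  # block the commit until the fix is applied
    return True

ai_generated = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
if not on_commit(ai_generated):
    print("Commit blocked pending remediation.")
```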

Speed Without Safety Is Unsustainable

Every data point in this report converges on the same inflection point: AI-generated development isn’t just outpacing security. It’s running circles around it.

Heads of Development are no longer responsible only for delivering fast; they’re also responsible for delivering safely at scale. That means rethinking how and where security fits into their developers’ workflow.

AI velocity comes with a bill attached. And it’s up to development leaders to ensure that the bill isn’t paid in breaches and eroded trust.

