Observability for AI Systems: Strengthening visibility for proactive risk detection
2026-03-18 · Source: www.microsoft.com

Adoption of Generative AI (GenAI) and agentic AI has accelerated from experimentation into real enterprise deployments. What began with copilots and chat interfaces has quickly evolved into powerful business systems that autonomously interact with sensitive data, call external APIs, connect to consequential tools, initiate workflows, and collaborate with other agents across enterprise environments. As these AI systems become core infrastructure, establishing clear, continuous visibility into how these systems behave in production can help teams detect risk, validate policy adherence, and maintain operational control.

Observability is one of the foundational security and governance requirements for AI systems operating in production. Yet many organizations don’t understand the critical importance of observability for AI systems or how to implement effective AI observability. That mismatch creates potential blind spots at precisely the moment when visibility matters most.

In February, Microsoft Corporate Vice President and Deputy Chief Information Security Officer, Yonatan Zunger, blogged about expanding Microsoft’s Secure Development Lifecycle (SDL) to address AI-specific security concerns. Today, we continue the discussion with a deep dive into observability as a necessity for the secure development of GenAI and agentic AI systems.

For additional context, read the Secure Agentic AI for Your Frontier Transformation blog that covers how to manage agent sprawl, strengthen identity controls, and improve governance across your tenant.

Observability for AI systems

In traditional software, client apps make structured API calls and backend services execute predefined logic. Because code paths follow deterministic flows, traditional observability tools can surface straightforward metrics like latency, errors, and throughput to track software performance in production.

GenAI and agentic AI systems complicate this model. AI systems are probabilistic by design and make complex decisions about what to do next as they run. This makes relying on predictable, finite sets of success and failure modes much more difficult. We need to evolve the types of signals and telemetry collected so that we can accurately understand and govern what is happening in an AI system.

Consider this scenario: an email agent asks a research agent to look up something on the web. The research agent fetches a page containing hidden instructions and passes the poisoned content back to the email agent as trusted input. The email agent, now operating under attacker influence, forwards sensitive documents to unauthorized recipients, resulting in data exfiltration.

In this example, traditional health metrics stay green: no failures, no errors, no alerts. The system is working exactly as designed… except a boundary between untrusted external content and trusted agent context has been compromised.

This illustrates how AI systems require a unique approach to observability. Without insights into how context was assembled at each step—what was retrieved, how it impacted model behavior, and where it propagated across agents—there is no way to detect the compromise or reconstruct what occurred.
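The trust-boundary violation in the scenario above can be made detectable if every piece of content entering an agent's context carries a trust label that propagates with it. The sketch below illustrates the idea with hypothetical names (`ContextItem`, `check_action`, and the action names are illustrative, not from any particular framework): when a consequential action is influenced by untrusted context, a warning is raised even though no traditional error occurred.

```python
from dataclasses import dataclass

# Hypothetical trust labels for content entering an agent's context.
UNTRUSTED = "untrusted"   # e.g. fetched web pages
TRUSTED = "trusted"       # e.g. system instructions, direct user prompts


@dataclass
class ContextItem:
    source: str   # provenance: where the content came from
    trust: str    # trust classification, propagated with the content
    text: str


def check_action(action: str, context: list[ContextItem]) -> list[str]:
    """Return warnings when a consequential action was influenced by
    any untrusted item in the assembled context."""
    consequential = {"send_email", "forward_document"}  # illustrative set
    if action not in consequential:
        return []
    return [
        f"trust-boundary alert: {action} influenced by {item.source}"
        for item in context
        if item.trust == UNTRUSTED
    ]


context = [
    ContextItem("system_prompt", TRUSTED, "You are an email agent."),
    ContextItem("web:example.com/page", UNTRUSTED, "<hidden instructions>"),
]
warnings = check_action("forward_document", context)
```

In the poisoned-research-agent scenario, traditional metrics stay green, but a check like this fires because the retrieved web page, not the user, influenced the forwarding action.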

Traditional monitoring, built around uptime, latency, and error rates, can miss the root cause here and provide limited signal for attribution or reconstruction in AI-related scenarios. This is an example of one of the new categories of risk that the SDL must now account for, and it is why Microsoft has incorporated enhanced AI observability practices within our secure development practices.

Traditional observability versus AI observability

Observability of AI systems means the ability to monitor, understand, and troubleshoot what an AI system is doing, end-to-end, from development and evaluation to deployment and operation. Traditional services treat inputs as bounded and schema-defined. In AI systems, input is assembled context. This includes natural language instructions plus whatever the system pulls in and acts on, such as system and developer instructions, conversation history, outputs returned from tools, and retrieved content (web pages, emails, documents, tickets).

For AI observability, context is key: capture which input components were assembled for each run, including source provenance and trust classification, along with the resulting system outputs.
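One way to capture assembled context per run is a structured record listing each component's source and trust classification. The sketch below is a minimal, hypothetical record format (the field names are assumptions, not a standard schema); raw text is hashed so the record can be retained under stricter privacy rules while still supporting correlation and forensic reconstruction.

```python
import hashlib
import json


def context_record(run_id: str, components: list[dict], output: str) -> str:
    """Serialize one run's assembled context for an observability pipeline.
    Each component keeps its provenance and trust class; text is reduced
    to a digest so the record itself holds no raw content."""
    def digest(text: str) -> str:
        return hashlib.sha256(text.encode()).hexdigest()[:16]

    record = {
        "run_id": run_id,
        "context": [
            {"source": c["source"], "trust": c["trust"], "digest": digest(c["text"])}
            for c in components
        ],
        "output_digest": digest(output),
    }
    return json.dumps(record)


rec = json.loads(context_record(
    "run-001",
    [{"source": "system_prompt", "trust": "trusted", "text": "Summarize."},
     {"source": "web:example.com", "trust": "untrusted", "text": "<page>"}],
    "Summary text",
))
```

Whether to store digests, redacted text, or full text is a policy decision; the key point is that provenance and trust classification travel with every run.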

Traditional observability is often optimized for request-level correlation, where a single request maps cleanly to a single outcome, with correlation captured inside one trace. In AI systems, dangerous failures can unfold across many turns. Each step looks harmless until the conversation ramps into disallowed output, as we’ve seen in multi-turn jailbreaks like Crescendo.

For AI observability, best practices call for propagating a stable conversation identifier across turns, preserving trace context end-to-end, so outcomes can be understood within the full conversational narrative rather than in isolation. This is “agent lifecycle-level correlation,” where the span of correlation should be the same as the span of persistent memory or state within the system.
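A minimal sketch of agent lifecycle-level correlation: one stable conversation identifier is generated when the conversation starts and stamped onto every turn's span, so a multi-turn escalation can later be reassembled as a single narrative. The class and field names here are illustrative, not from any specific tracing library.

```python
import itertools
import uuid


class ConversationTrace:
    """Propagate one stable conversation identifier across turns so
    multi-turn behavior can be analyzed as a single narrative."""

    def __init__(self):
        self.conversation_id = str(uuid.uuid4())  # stable for the whole conversation
        self._turn = itertools.count(1)
        self.spans: list[dict] = []

    def record_turn(self, prompt: str, response: str) -> dict:
        span = {
            "conversation_id": self.conversation_id,  # same across all turns
            "turn": next(self._turn),
            "prompt": prompt,
            "response": response,
        }
        self.spans.append(span)
        return span


trace = ConversationTrace()
trace.record_turn("Tell me about X", "X is ...")
trace.record_turn("Now go further", "...")
```

In a real system this identifier would ride in the trace context (for example via OpenTelemetry context propagation) rather than a local list, but the invariant is the same: the correlation span matches the span of persistent state.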

Defining AI system observability

Traditional observability is built on logs, metrics, and traces. This model works well for conventional software because it’s optimized around deterministic, quantifiable infrastructure and service behavior such as availability, latency, throughput, and discrete errors.

AI systems aren’t deterministic. They evaluate natural language inputs and return probabilistic results that can differ subtly (or significantly) from execution to execution. Logs, metrics, and traces still apply here, but what gets captured within them is different. Observability for AI systems updates traditional observability to capture AI-native signals.

Logs, metrics, and traces indicate what happened in the AI system at runtime.

  • Logs capture data about the interaction: request identity context, timestamp, user prompts and model responses, which agents or tools were invoked, which data sources were consulted, and so on. This is the core information that tells you what happened. User prompts and model responses are often the earliest signal of novel attacks before signatures exist, and are essential for identifying multi-turn escalation, verifying whether attacks changed system behavior, adjudicating safety detections, and reconstructing attack paths. User-prompt and model-response logs can reveal the exact moment an AI agent stops following user intent and starts obeying attacker-authored instructions from retrieved content.
  • Metrics measure traditional performance details like latency, response times, and errors as well as AI-specific information such as token usage, agent turns, and retrieval volume. This information can reveal issues such as unauthorized usage or behavior changes due to model updates.
  • Traces capture the end-to-end journey of a request as an ordered sequence of execution events, from the initial prompt through response generation. Without traces, debugging an agent failure means guessing which step went wrong.
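The metrics bullet above can be sketched as a small collector that tracks traditional latency alongside AI-specific counters such as token usage, agent turns, and retrieval volume. This is an illustrative sketch (the class and metric names are assumptions); a production system would emit these to a metrics backend rather than hold them in memory.

```python
from collections import Counter


class AIMetrics:
    """Collect AI-specific counters alongside traditional latency samples."""

    def __init__(self):
        self.counters = Counter()
        self.latencies_ms: list[float] = []

    def record_run(self, tokens: int, agent_turns: int,
                   retrievals: int, latency_ms: float) -> None:
        # AI-native signals: consumption and activity volume per run.
        self.counters["tokens"] += tokens
        self.counters["agent_turns"] += agent_turns
        self.counters["retrievals"] += retrievals
        # Traditional signal: request latency.
        self.latencies_ms.append(latency_ms)


metrics = AIMetrics()
metrics.record_run(tokens=900, agent_turns=3, retrievals=2, latency_ms=480.0)
metrics.record_run(tokens=600, agent_turns=1, retrievals=0, latency_ms=120.0)
```

A sudden jump in any of these counters relative to normal traffic is exactly the kind of signal that can reveal unauthorized usage or a behavior change after a model update.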

AI observability also incorporates two new core components: evaluation and governance.

  • Evaluation measures response quality, assesses whether outputs are grounded in source material, and evaluates whether agents use tools correctly. Evaluation gives teams measurable signals to help understand agent reliability, instruction alignment, and operational risk over time.
  • Governance is the ability to measure, verify, and enforce acceptable system behavior using observable evidence. Governance uses telemetry and control plane mechanisms to ensure that the system supports policy enforcement, auditability, and accountability.
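To make the evaluation component concrete, here is a deliberately naive groundedness check: the fraction of response words that appear in the retrieved source material. This is a stand-in sketch only; real groundedness evaluation typically uses model-based judges or human review, not word overlap.

```python
def groundedness_score(response: str, sources: list[str]) -> float:
    """Naive stand-in for a groundedness evaluator: fraction of response
    words that appear in the retrieved source material. Production
    systems would use model-based or human evaluation instead."""
    response_words = set(response.lower().split())
    if not response_words:
        return 0.0
    source_words = set(" ".join(sources).lower().split())
    return len(response_words & source_words) / len(response_words)
```

Tracking even a crude score like this over time gives teams a measurable signal for reliability drift, which is the operational point of the evaluation component.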

These key components of observability give teams improved oversight of AI systems, helping them ship with greater confidence, troubleshoot faster, and tune quality and cost over time.  

Operationalizing AI observability through the SDL

The SDL provides a formal mechanism by which technology leaders and product teams can operationalize observability. The following five steps can help teams implement observability in their AI development workflows.

  1. Incorporate AI observability into your secure development standards. Observability standards for GenAI and agentic AI systems should be codified requirements within your development lifecycle, not discretionary practices left to individual teams.
  2. Instrument from the start of development. Build AI-native telemetry into your system at design time, not after release. Aligning with industry conventions for logging and tracing, such as OpenTelemetry (OTel) and its GenAI semantic conventions, can improve consistency and interoperability across frameworks. For implementation in agentic systems, use platform-native capabilities such as Microsoft Foundry agent tracing (in preview) for runtime trace diagnostics in Foundry projects. For Microsoft Agent 365 integrations, use the OTel-based Microsoft Agent 365 Observability SDK (in Frontier preview) to emit telemetry into Agent 365 governance workflows.
  3. Capture the full context. Log user prompts and model responses, retrieval provenance, what tools were invoked, what arguments were passed, and what permissions were in effect. This detail can help security teams distinguish a model error from an exploited trust boundary and enables end-to-end forensic reconstruction. What to capture and retain should be governed by clear data contracts that balance forensic needs against privacy, data residency, retention requirements, and compliance with legal and regulatory obligations, with access controls and encryption aligned to enterprise policy and risk assessments.
  4. Establish behavioral baselines and alert on deviation. Capture normal patterns of agent activity—tool call frequencies, retrieval volumes, token consumption, evaluation score distributions—through Azure Monitor and Application Insights or similar services. Alert on meaningful departures from those baselines rather than relying solely on static error thresholds.
  5. Manage enterprise AI agents. Observability alone cannot answer every question. Technology leaders need to know how many AI agents are running, whether those agents are secure, and whether compliance and policy enforcement are consistent. Observability, when coupled with unified governance, can support improved operational control. Microsoft Foundry Control Plane, for example, consolidates inventory, observability, compliance with organization-defined AI guardrail policies, and security into one role-aware interface; Microsoft Agent 365 (in Frontier preview) provides tenant-level governance in the Microsoft 365 admin plane.
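Step 4 above can be sketched as a simple baseline-deviation check: compare each observed metric against its historical distribution and alert when it falls more than a threshold number of standard deviations from the mean. This is a hypothetical alerting sketch (function and metric names are assumptions); services like Azure Monitor provide this kind of anomaly detection natively.

```python
import statistics


def deviation_alerts(baseline: dict[str, list[float]],
                     observed: dict[str, float],
                     threshold: float = 3.0) -> list[str]:
    """Flag metrics that deviate more than `threshold` standard
    deviations from their baseline distribution."""
    alerts = []
    for metric, history in baseline.items():
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        z = abs(observed.get(metric, mean) - mean) / stdev
        if z > threshold:
            alerts.append(f"{metric}: z={z:.1f} exceeds baseline")
    return alerts


baseline = {"tokens_per_run": [100.0, 102.0, 98.0, 101.0, 99.0]}
spike = deviation_alerts(baseline, {"tokens_per_run": 500.0})
quiet = deviation_alerts(baseline, {"tokens_per_run": 101.0})
```

Alerting on distributional deviation rather than static error thresholds is what lets teams catch behavior changes, such as a compromised agent suddenly retrieving far more content than usual, that never trip a conventional error.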

To learn more about how Microsoft can help you manage agent sprawl, strengthen identity controls, and improve governance across your tenant, read the Secure Agentic AI for Your Frontier Transformation blog.

Benefits for security teams

Making enterprise AI systems observable transforms opaque model behavior into actionable security signals, strengthening both proactive risk detection and reactive incident investigation.

When embedded in the SDL, observability becomes an engineering control. Teams define data contracts early, instrument during design and build, and verify before release that observability is sufficient for detection and incident response. Security testing can then validate that key scenarios such as indirect prompt injection or tool-mediated data exfiltration are surfaced by runtime protections and that logs and traces enable end-to-end forensic reconstruction of event paths, impact, and control decisions.  

Many organizations already deploy inference-time protections, such as Microsoft Foundry guardrails and controls. Observability complements these protections, enabling fast incident reconstruction, clear impact analysis, and measurable improvement over time. Security teams can then evaluate how systems behave in production and whether controls are working as intended.

Adapting traditional SDL and monitoring practices for non-deterministic systems doesn’t mean reinventing the wheel. In most cases, well-known instrumentation practices can be simply expanded to capture AI-specific signals, establish behavioral baselines, and test for detectability. Standards and platforms such as OpenTelemetry and Azure Monitor can support this shift.

AI observability should be a release requirement. If you cannot reconstruct an agent run or detect trust-boundary violations from logs and traces, the system may not be ready for production.


Source: https://www.microsoft.com/en-us/security/blog/2026/03/18/observability-ai-systems-strengthening-visibility-proactive-risk-detection/