The Top 10 Blog Posts of 2025

Every January on the SEI Blog, we present the 10 most-visited posts from the previous year. This year’s top 10 list highlights the SEI’s work in software acquisition, artificial intelligence, threat modeling, machine learning test and evaluation, and enterprise risk management. The posts, all published in 2025, are presented below in reverse order based on the number of visits.

10. Perspectives on Generative AI in Software Engineering and Acquisition
by Anita Carleton, James Ivers, Ipek Ozkaya, John E. Robert, Douglas Schmidt (William & Mary), and Shen Zhang

In the realm of software engineering and software acquisition, generative AI promises to improve developer productivity, speed the production of related artifacts, and in some cases improve their quality. It is essential, however, that software and acquisition professionals learn how to apply AI-augmented methods and tools effectively in their workflows. This blog post focuses on the future of software engineering and acquisition using generative AI technologies such as ChatGPT, DALL·E, and Copilot, and explores experts' perspectives on applying generative AI in software engineering and acquisition. It is the latest in a series of blog posts on these topics.

The blog post includes perspectives from SEI Fellow Anita Carleton, director of the SEI Software Solutions Division, along with a group of SEI thought leaders on AI and software, including James Ivers, principal engineer; Ipek Ozkaya, technical director of the Engineering Intelligent Software Systems group; John Robert, deputy director of the Software Solutions Division; Douglas Schmidt, formerly the Director of Operational Test and Evaluation at the Department of Defense (DoD) and now the inaugural dean of the School of Computing, Data Sciences, and Physics at William & Mary; and Shen Zhang, a senior engineer.

9. Reflections and Predictions: Cybersecurity in 2025
by Greg Touhill

In his yearly reflection-and-forecast blog post, CERT Director Greg Touhill draws on his decades of experience as an information technology and cybersecurity senior executive, and on what he has learned leading the SEI's CERT Division (one of the first organizations dedicated to cyber research and response). Channeling the spirit of nearby Punxsutawney Phil, that famous prognosticating Pennsylvania groundhog, he looks into 2025 and forecasts what we will likely reflect upon at the end of this year.

Read the post in its entirety.

8. Stop Imagining Threats, Start Mitigating Them: A Practical Guide to Threat Modeling
by Alex Vesey

When building a software-intensive system, a key part in creating a secure and robust solution is to develop a cyber threat model. Threat models are important because they guide requirements, system design, and operational choices. This blog post focuses on a method threat modelers can use to make credible claims about attacks the system could face and to ground those claims in observations of adversary tactics, techniques, and procedures (TTPs).
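As a hypothetical illustration of this grounding step (not the method from the post itself), the sketch below keeps only those threat claims that are backed by an adversary TTP actually observed in the system's operating environment. The component names and the TTP catalog are invented for illustration, though the IDs are real MITRE ATT&CK technique identifiers.

```python
# Hypothetical sketch: ground threat-model claims in observed adversary
# TTPs rather than imagined attacks. Components and catalog are invented.

# TTPs (MITRE ATT&CK IDs) observed in use by adversaries relevant to the
# system's operating environment.
observed_ttps = {
    "T1190": "Exploit Public-Facing Application",
    "T1078": "Valid Accounts",
    "T1557": "Adversary-in-the-Middle",
}

# Attack surface: each system component lists the TTP IDs that could
# plausibly target it.
attack_surface = {
    "web_gateway": ["T1190", "T1557"],
    "admin_portal": ["T1078"],
    "build_server": ["T1195"],  # Supply Chain Compromise: not yet observed
}

def credible_threats(surface, observed):
    """Keep only the claims backed by an observed TTP."""
    return {
        component: [t for t in ttps if t in observed]
        for component, ttps in surface.items()
    }

threats = credible_threats(attack_surface, observed_ttps)
print(threats)
```

An empty list (as for `build_server` here) flags a claim that is still imagined rather than observed, and therefore needs further evidence before it drives requirements.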

Read the post in its entirety.

7. Introducing MLTE: A Systems Approach to Machine Learning Test and Evaluation
by Alex Derr, Sebastián Echeverría, Katherine R. Maffey (AI Integration Center, U.S. Army), and Grace Lewis

Without proper testing, systems that contain machine learning components (ML-enabled systems, or ML systems for short) can fail in production, sometimes with serious real-world consequences. Testing and evaluation (T&E) of these systems can help determine if they will perform as expected—and desired—before going into production. However, ML systems are notoriously difficult to test for a variety of reasons, including challenges around properly defining requirements and evaluation criteria. As a result, there are currently few accepted best practices for testing ML systems. In this blog post, we introduce Machine Learning Test and Evaluation (MLTE), a new process and tool jointly developed by SEI and the Army AI Integration Center (AI2C) to mitigate this problem and create safer, more reliable ML systems.
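MLTE ships with tooling of its own, which is not reproduced here. As a rough sketch of the general idea of spec-driven ML evaluation (negotiate measurable requirements first, then validate measurements against them), with invented property names and thresholds:

```python
# Generic sketch of spec-driven ML test and evaluation, in the spirit of
# negotiating requirements before validating a model against them. This
# is NOT the mlte package's actual API; names and thresholds are invented.

spec = {
    "accuracy":      lambda v: v >= 0.90,   # minimum task performance
    "inference_ms":  lambda v: v <= 50.0,   # latency budget per input
    "model_size_mb": lambda v: v <= 100.0,  # deployment footprint
}

measured = {"accuracy": 0.93, "inference_ms": 62.0, "model_size_mb": 48.0}

def validate(spec, measured):
    """Return pass/fail per requirement; missing measurements fail."""
    return {
        prop: (prop in measured and check(measured[prop]))
        for prop, check in spec.items()
    }

results = validate(spec, measured)
print(results)  # inference_ms fails its 50 ms budget
```

Writing evaluation criteria down as executable checks like these is what turns vague expectations ("the model should be fast enough") into testable requirements.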

Read the post in its entirety.

6. Artificial Intelligence in National Security: Acquisition and Integration
by Paige Rishel, Carol J. Smith, Brigid O’Hearn, and Rita C. Creel

As defense and national security organizations consider integrating AI into their operations, many acquisition teams are unsure of where to start. In June, the SEI hosted an AI Acquisition workshop. This blog post details practitioner insights from the workshop, including challenges in differentiating AI systems, guidance on when to use AI, and matching AI tools to mission needs.

Read the post in its entirety.

5. Out-of-Distribution Detection for AI Systems

A critical challenge in artificial intelligence is knowing when an AI system is operating outside its intended knowledge boundaries. This is the domain of out-of-distribution (OoD) detection: identifying when an AI system faces situations it was not trained to handle. Through our work in the SEI's AI Division, particularly our collaboration with the Office of the Under Secretary of Defense for Research and Engineering (OUSD R&E) to establish the Center for Calibrated Trust Measurement and Evaluation (CaTE), we have seen firsthand the challenges facing AI deployment in defense applications.
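One common baseline for OoD detection (a standard technique from the literature, not necessarily the approach described in the post) is thresholding the model's maximum softmax probability: inputs on which the model is not confident in any class are flagged as outside its training distribution. The logits and threshold below are illustrative.

```python
import numpy as np

# Maximum softmax probability (MSP) baseline for OoD detection.
# Logits and the 0.7 threshold are illustrative values.

def softmax(logits):
    z = logits - np.max(logits)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def is_out_of_distribution(logits, threshold=0.7):
    """Flag an input as OoD when no class probability is confident."""
    return float(np.max(softmax(logits))) < threshold

confident = np.array([4.0, 0.5, 0.2])   # peaked: in-distribution
uncertain = np.array([1.0, 0.9, 1.1])   # flat: likely OoD

print(is_out_of_distribution(confident))  # False
print(is_out_of_distribution(uncertain))  # True
```

MSP is only a baseline: a model can be confidently wrong on OoD inputs, which is one reason calibrated trust measurement is an active research area.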

Read the post in its entirety.

4. Introducing the Insider Incident Data Exchange Standard (IIDES)
by Austin Whisnant

Recent research indicates that organizational insiders perpetrate 35 percent of data breaches, and malicious insider incidents cost organizations an average of $701,500 annually. The study and management of insider threat and risk draw growing attention and concern, but capturing and sharing information about insider incidents in a standardized way has been a challenge for practitioners. A standard for incident classification and information sharing could allow practitioners to build, maintain, deidentify, and share insider threat case data with an eye toward building more robust data for analysis and insights that benefit their organizations and the whole community. In this post, we introduce the Insider Incident Data Exchange Standard (IIDES) schema for insider incident data collection, provide an example use case, and invite you to collaborate with us on its development.
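To make the idea of standardized, shareable incident records concrete, here is a small sketch of validating and deidentifying a record before sharing it. The field names are invented for illustration; they are not the actual IIDES schema.

```python
import json

# Illustrative sketch of standardized insider-incident records in the
# spirit of a shared schema like IIDES. Field names are invented.

REQUIRED_FIELDS = {"incident_id", "incident_type", "detected_date", "outcome"}

def validate(record):
    """Reject records missing fields the community agreed to capture."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return record

def deidentify(record):
    """Drop identifying fields before sharing outside the organization."""
    sensitive = {"subject_name", "organization"}
    return {k: v for k, v in record.items() if k not in sensitive}

incident = validate({
    "incident_id": "2025-0042",
    "incident_type": "data_exfiltration",
    "detected_date": "2025-03-14",
    "outcome": "contained",
    "subject_name": "J. Doe",          # removed before sharing
    "organization": "Example Corp",    # removed before sharing
})

shared = deidentify(incident)
print(json.dumps(shared, indent=2))
```

Because every organization applies the same required fields and the same deidentification step, the resulting case data can be pooled for community-wide analysis.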

Read the post in its entirety.

3. The DevSecOps Capability Maturity Model
by Timothy A. Chick, Brent Frye, and Aaron Reffett

Implementing DevSecOps can improve multiple aspects of a software organization's effectiveness and the quality of the software for which it is responsible. Implementing DevSecOps is a complex process, however, and the way a program evaluates progress in its implementation matters. We propose here a frame of reference for DevSecOps maturity that enables organizations to focus on outcomes, the value delivered, without excessive focus on compliance.

The Department of Defense’s (DoD) DevSecOps Documentation Set emphasizes program activities that speed delivery, tighten security, and improve collaboration across the software development lifecycle. Evaluating these activities against a set of characteristics, attributes, indicators, and patterns is not sufficient. It must be done within the context of value delivered. Therefore, in this blog post, we first define value in a DevSecOps context. Next, we describe how the DevSecOps Platform Independent Model (PIM) provides an authoritative reference model for evaluating an organization’s DevSecOps capability maturity. Finally, we provide a benchmark example of a DevSecOps capability profile.
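A capability profile of the kind benchmarked in the post can be pictured as a simple gap analysis between target and assessed maturity levels. The capability names and levels below are invented for illustration and are not taken from the DoD DevSecOps PIM.

```python
# Illustrative DevSecOps capability-profile gap analysis. Capability
# names and maturity levels (0-3) are invented, not from the PIM.

target_profile = {
    "continuous_integration":     3,
    "automated_security_testing": 3,
    "continuous_monitoring":      2,
    "deployment_automation":      3,
}

assessed = {
    "continuous_integration":     3,
    "automated_security_testing": 1,
    "continuous_monitoring":      2,
    "deployment_automation":      2,
}

def gaps(target, actual):
    """Capabilities whose assessed maturity falls short of the target."""
    return {c: target[c] - actual.get(c, 0)
            for c in target if actual.get(c, 0) < target[c]}

print(gaps(target_profile, assessed))
# {'automated_security_testing': 2, 'deployment_automation': 1}
```

The point of framing maturity this way is that each gap can be tied back to the value a capability delivers, rather than treated as a compliance checkbox.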

Read the post in its entirety.

2. Evaluating LLMs for Text Summarization: An Introduction
by Shannon Gallagher, Swati Rallapalli, and Tyler Brooks

Large language models (LLMs) have shown tremendous potential across various applications. At the SEI, we study the application of LLMs to a number of DoD-relevant use cases. One application we consider is intelligence report summarization, where LLMs could significantly reduce the analyst cognitive load and, potentially, the extent of human error. However, deploying LLMs without human supervision and evaluation could lead to significant errors including, in the worst case, the potential loss of life. In this post, we outline the fundamentals of LLM evaluation for text summarization in high-stakes applications such as intelligence report summarization. We first discuss the challenges of LLM evaluation, give an overview of the current state of the art, and finally detail how we are filling the identified gaps at the SEI.
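Reference-based overlap metrics are one common starting point for summary evaluation. Below is a minimal sketch in the spirit of ROUGE-1 (unigram overlap); the texts are illustrative, and real high-stakes evaluations pair such metrics with human review.

```python
from collections import Counter

# Minimal unigram-overlap metric in the spirit of ROUGE-1.
# Example texts are invented for illustration.

def rouge1(candidate, reference):
    """Return (precision, recall) of unigram overlap with the reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())      # multiset intersection
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    return precision, recall

reference = "the patrol observed two vehicles near the northern checkpoint"
candidate = "two vehicles were observed near the northern checkpoint"

p, r = rouge1(candidate, reference)
print(f"precision={p:.2f} recall={r:.2f}")
```

Overlap metrics are cheap to compute but blind to meaning: a summary can score well while inverting a key fact, which is exactly the failure mode that matters in intelligence reporting.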

Read the post in its entirety.

1. Radio Frequency 101: Can You Really Hack a Radio Signal?
by Roxxanne White and Michael Bragg

In 2017, a malicious actor exploited the signals in Dallas’s emergency siren system and set off alarms for over 90 minutes. These types of attacks can affect devices that use radio frequency (RF) technology, from smart security systems to aircraft. RF also plays a critical role in many military systems such as navigation, radar, and communication systems. Common DoD use cases include satellite communication (SATCOM), radar, and tactical data links that help coordinate troop movements, signal position information about a target, or help maintain communication between aircraft and drones.

In this blog post, we explore some of the fundamentals of radio frequency communication, delve into the generalities of protocols and device interactions, discuss common RF tools, and uncover ways malicious actors can attack systems. We summarize the basics of RF technology and the risks associated with it, and we discuss how the SEI is helping to secure wireless communications.
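To make the risk concrete, consider on-off keying (OOK), a modulation used by many simple RF devices: bits merely gate a carrier on and off, so anyone who captures the waveform can demodulate (or replay) it without knowing any secret. The parameters below are illustrative, not taken from any real device.

```python
import numpy as np

# Illustrative on-off keying (OOK): bits gate a sinusoidal carrier.
# Sample rate, carrier frequency, and bit duration are invented values.

fs = 10_000          # sample rate (Hz)
carrier_hz = 1_000   # carrier frequency (Hz)
samples_per_bit = 100

def ook_modulate(bits):
    """Transmit: carrier on for a 1 bit, silence for a 0 bit."""
    t = np.arange(len(bits) * samples_per_bit) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    gate = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    return gate * carrier

def ook_demodulate(signal):
    """Recover bits by thresholding per-bit energy; no key required."""
    chunks = signal.reshape(-1, samples_per_bit)
    energy = (chunks ** 2).mean(axis=1)
    return (energy > energy.max() / 2).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1]
captured = ook_modulate(bits)     # what an eavesdropper records
print(ook_demodulate(captured))   # bits recovered from the capture alone
```

Since the demodulator needs nothing but the captured waveform, any unauthenticated OOK link is trivially readable and replayable, which is why the Dallas siren-style attack was possible.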

Read the post in its entirety.

Looking Ahead in 2026

Learn more about our cutting-edge research by checking back weekly for posts highlighting the SEI’s work in artificial intelligence, machine learning, cybersecurity, software engineering, and vulnerability management.

