NDSS 2025 – The Midas Touch: Triggering The Capability Of LLMs For RM-API Misuse Detection
This study proposes ChatDetector, an automated solution based on Large Language Models (LLMs) for detecting API misuse in software resource management (RM). By combining the ReAct framework with Chain-of-Thought prompting, the approach effectively extracts RM-API constraints while reducing the errors introduced by hallucinations. Experiments show that ChatDetector significantly outperforms existing methods in precision and recall, and it identified 115 potential security bugs.

Session 13B: API Security

Authors, Creators & Presenters: Yi Yang (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Jinghua Liu (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Miaoqian Lin (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China)
PAPER
The Midas Touch: Triggering the Capability of LLMs for RM-API Misuse Detection
As the basis of software resource management (RM), strictly following RM-API constraints guarantees secure resource management and secure software. Researchers have found it effective to detect RM-API misuse in open-source software using RM-API constraints retrieved from documentation and code. However, current pattern-matching constraint-retrieval methods have limitations: documentation-based methods leave undiscovered many API constraints that are irregularly distributed or expressed with neutral sentiment, while code-based methods report many false bugs because not all high-frequency usages are correct. Researchers have therefore proposed using Large Language Models (LLMs) for RM-API constraint retrieval, given their strength in text analysis and generation. However, directly using LLMs is limited by hallucinations: the LLMs fabricate answers without expertise, leaving many RM APIs undiscovered, and generate incorrect answers even when given evidence, introducing incorrect RM-API constraints and false bugs. In this paper, we propose an LLM-empowered RM-API misuse detection solution, ChatDetector, which fully automates LLM-based documentation understanding for RM-API constraint retrieval and RM-API misuse detection. To correctly retrieve the RM-API constraints, ChatDetector takes inspiration from the ReAct framework, optimized with Chain-of-Thought (CoT) reasoning, to decompose the complex task into allocation-API identification, RM-object extraction (the object allocated/released by RM APIs), and RM-API pairing (RM APIs usually exist in pairs). It first verifies the semantics of allocation APIs through LLMs, based on RM sentences retrieved from API documentation. Motivated by the LLMs' varying performance under different prompting methods, ChatDetector adopts a two-dimensional prompting approach for cross-validation.
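The cross-validation idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two verdict dictionaries stand in for the outputs of two different LLM prompting dimensions (e.g., one prompt per candidate API, one prompt per documentation sentence), and the API names are assumed examples from libcurl.

```python
def cross_validate(api_centric: dict[str, bool],
                   sentence_centric: dict[str, bool]) -> set[str]:
    """Keep only APIs that both prompting dimensions agree are allocators."""
    return {api for api, is_alloc in api_centric.items()
            if is_alloc and sentence_centric.get(api, False)}

# Hypothetical LLM verdicts for two candidate APIs.  The second dimension
# rejects curl_easy_perform, filtering out a hallucinated "allocator".
dim1 = {"curl_easy_init": True, "curl_easy_perform": True}
dim2 = {"curl_easy_init": True, "curl_easy_perform": False}
print(cross_validate(dim1, dim2))  # {'curl_easy_init'}
```

Requiring agreement between independent prompting views is one simple way to diminish hallucinations: a fabricated allocation API is unlikely to be confirmed by both dimensions.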
At the same time, an inconsistency check between the LLMs' output and their reasoning process, performed with an off-the-shelf Natural Language Processing (NLP) tool, confirms the allocation APIs. To accurately pair the RM-APIs, ChatDetector decomposes the task again: it first identifies the RM-object type, with which it can then accurately pair the releasing APIs and construct the RM-API constraints for misuse detection. With hallucinations diminished, ChatDetector identifies 165 pairs of RM-APIs with a precision of 98.21%, outperforming state-of-the-art API detectors. Using the static detector CodeQL, we ethically reported 115 security bugs in applications built on six popular libraries to their developers; these bugs may lead to severe issues such as Denial-of-Service (DoS) and memory corruption. Compared with an end-to-end benchmark method, ChatDetector retrieves at least 47% more RM sentences and 80.85% more RM-API constraints. Since, to the best of our knowledge, no prior work applies LLMs to RM-API misuse detection, these encouraging results show that LLMs can help generate constraints beyond human expertise and can be used for bug detection. They also suggest that future research could shift from overcoming the bottlenecks of traditional NLP tools to creatively applying LLMs to security research.
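Once RM-API pairs are established, misuse detection reduces to checking that every allocation is eventually matched by its paired release. The paper uses CodeQL to reason over real program paths; the toy checker below (not the paper's tooling, with an assumed libcurl pair as the example constraint) only illustrates the constraint being enforced, over a simplified linear call trace.

```python
def find_leaks(trace: list[str], pairs: dict[str, str]) -> list[str]:
    """Return allocation calls in `trace` never matched by their release.

    `pairs` maps an allocation API to its releasing API, e.g.
    {"curl_easy_init": "curl_easy_cleanup"}.  A real detector analyzes
    all program paths; this linear scan is purely illustrative.
    """
    open_allocs: list[str] = []
    releases = {rel: alloc for alloc, rel in pairs.items()}
    for call in trace:
        if call in pairs:                 # allocation: open a resource
            open_allocs.append(call)
        elif call in releases and releases[call] in open_allocs:
            open_allocs.remove(releases[call])  # matched release
    return open_allocs                    # anything left is leaked

# Two allocations but only one release -> one leaked handle, the kind of
# RM-API misuse that can cause memory exhaustion and DoS.
pairs = {"curl_easy_init": "curl_easy_cleanup"}
trace = ["curl_easy_init", "curl_easy_perform",
         "curl_easy_init", "curl_easy_cleanup"]
print(find_leaks(trace, pairs))  # ['curl_easy_init']
```

The constraint itself (which releasing API pairs with which allocation API, and for which object type) is exactly what ChatDetector extracts from documentation before any such check can run.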
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.

Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenter’s superb NDSS Symposium 2025 Conference content on the Organizations’ YouTube Channel.

Permalink

*** This is a Security Bloggers Network syndicated blog from Infosecurity.US authored by Marc Handelman. Read the original post at: https://www.youtube-nocookie.com/embed/99zzZP9hXUQ?si=ESh521qeXkVPznre


Article source: https://securityboulevard.com/2026/02/ndss-2025-the-midas-touch-triggering-the-capability-of-llms-for-rm-api-misuse-detection/