NDSS 2025 – Generating API Parameter Security Rules With LLM For API Misuse Detection
2026-02-23 16:00:00 · Source: securityboulevard.com

Session 13B: API Security

Authors, Creators & Presenters: Jinghua Liu (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Yi Yang (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Miaoqian Lin (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, China)
PAPER
Generating API Parameter Security Rules with LLM for API Misuse Detection
When utilizing library APIs, developers should follow the API security rules to mitigate the risk of API misuse. The API Parameter Security Rule (APSR) is a common type of security rule that specifies how API parameters should be safely used and places constraints on their values. Failure to comply with APSRs can lead to severe security issues, including null pointer dereference and memory corruption. Manually analyzing numerous APIs and their parameters to construct APSRs is labor-intensive and needs to be automated. Existing studies generate APSRs from documentation and code, but missing information and limited analysis heuristics result in missed APSRs. Given the Large Language Model's (LLM) superior capability in code analysis and text generation without predefined heuristics, we attempt to utilize it to address the challenges encountered in API misuse detection. However, directly utilizing LLMs leads to incorrect APSRs, which may cause false bugs in detection, and to overly general APSRs that cannot be turned into applicable detection code, leaving many security bugs undiscovered. In this paper, we present a new framework, named GPTAid, for automatic APSR generation by analyzing API source code with an LLM and detecting API misuse caused by incorrect parameter use. To validate the correctness of the LLM-generated APSRs, we propose an execution feedback-checking approach based on the observation that security-critical API misuse is often caused by APSR violations, most of which result in runtime errors. Specifically, GPTAid first uses the LLM to generate raw APSRs and the Right calling code, and then generates Violation code for each raw APSR by modifying the Right calling code using the LLM. Subsequently, GPTAid performs dynamic execution on each piece of Violation code and filters out the incorrect APSRs based on runtime errors. To generate concrete APSRs, GPTAid then employs a code differential analysis to refine the filtered ones.
Particularly, as programming languages are more precise than natural language, GPTAid identifies the key operations within the Violation code by differential analysis and then generates the corresponding concrete APSR based on those operations. These concrete APSRs can be precisely interpreted into applicable detection code, which proves effective in API misuse detection. Evaluated on a dataset containing 200 randomly selected APIs from eight popular libraries, GPTAid achieves a precision of 92.3%. Moreover, it generates six times more APSRs than state-of-the-art detectors on a comparison dataset of previously reported bugs and APSRs. We further evaluated GPTAid on 47 applications and found 210 unknown security bugs potentially resulting in severe security issues (e.g., system crashes), 150 of which have been confirmed by developers after our reports.
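The execution-feedback check described in the abstract can be sketched as follows. This is a minimal illustration only: it uses Python callables as stand-ins for the LLM-generated C snippets, and `memcpy_like`, `filter_apsrs`, and the rule strings are hypothetical examples, not GPTAid's actual interface.

```python
def triggers_runtime_error(snippet):
    """Execute a candidate code snippet; a runtime error suggests that
    the rule it was built to violate is a genuine APSR."""
    try:
        snippet()
        return False
    except Exception:
        return True

def filter_apsrs(candidates):
    """candidates: (apsr_text, right_code, violation_code) triples, where
    the code pieces stand in for the LLM-generated Right calling code and
    Violation code. Keep an APSR only if the Right code runs cleanly while
    its Violation code crashes -- the paper's execution-feedback check."""
    kept = []
    for apsr, right, violation in candidates:
        if not triggers_runtime_error(right) and triggers_runtime_error(violation):
            kept.append(apsr)
    return kept

def memcpy_like(dst, src):
    """Toy stand-in for an API whose dst parameter must not be NULL."""
    dst[:len(src)] = src          # raises TypeError when dst is None

def right_call():
    memcpy_like(bytearray(4), b"data")

def violation_call():
    memcpy_like(None, b"data")    # violates "dst must not be NULL"

rules = [("dst must not be NULL", right_call, violation_call)]
print(filter_apsrs(rules))        # → ['dst must not be NULL']
```

An incorrect raw APSR whose "violation" still executes cleanly would be dropped by the same check, which is how the paper filters LLM hallucinations before the differential-analysis refinement step.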
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.

Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


*** This is a Security Bloggers Network syndicated blog from Infosecurity.US authored by Marc Handelman. Read the original post at: https://www.youtube-nocookie.com/embed/HK8jjKXnDD4?si=7DpyDqJ9YoFlAbId


Article source: https://securityboulevard.com/2026/02/ndss-2025-generating-api-parameter-security-rules-with-llm-for-api-misuse-detection/
For infringement concerns, contact: admin#unsafe.sh