SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks In Split Learning
This article introduces SafeSplit, a new defense that detects and filters backdoor attacks mounted by malicious clients in the Split Learning framework. By combining static and dynamic analysis, the technique effectively identifies poisoned models and prevents their propagation while preserving model utility.

Published 2025-12-10 | securityboulevard.com

Session 5C: Federated Learning 1

Authors, Creators & Presenters: Phillip Rieger (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Kavita Kumari (Technical University of Darmstadt), Tigist Abera (Technical University of Darmstadt), Jonathan Knauer (Technical University of Darmstadt), Ahmad-Reza Sadeghi (Technical University of Darmstadt)
PAPER
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning
Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. The DNN is partitioned in SL, with most layers residing on the server and a few initial layers and inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN’s behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in SL. SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client’s training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layers’ parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server’s layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
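To make the two analyses more concrete, below is a minimal NumPy sketch of how a server might score a post-training checkpoint against its predecessor and walk back through stored checkpoints. This is not the authors' implementation: the function names, the use of an FFT magnitude spectrum as the frequency-domain representation, the thresholds, and the acceptance rule are illustrative assumptions based on the abstract.

import numpy as np

def rotational_distance(prev_params, curr_params):
    # Dynamic analysis (illustrative): angle in radians between the flattened
    # server-layer parameters before and after a client's training round,
    # used as a proxy for how far the parameters' orientation has shifted.
    a, b = prev_params.ravel(), curr_params.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def frequency_difference(prev_params, curr_params):
    # Static analysis (illustrative): relative distance between the magnitude
    # spectra of the two checkpoints' parameters (a frequency-domain comparison).
    spec_prev = np.abs(np.fft.rfft(prev_params.ravel()))
    spec_curr = np.abs(np.fft.rfft(curr_params.ravel()))
    return float(np.linalg.norm(spec_curr - spec_prev) / (np.linalg.norm(spec_prev) + 1e-12))

def looks_benign(prev_params, curr_params, rot_threshold=0.5, freq_threshold=0.1):
    # Hypothetical acceptance check: the update is treated as suspicious if
    # either metric exceeds its (purely illustrative) threshold.
    return (rotational_distance(prev_params, curr_params) < rot_threshold
            and frequency_difference(prev_params, curr_params) < freq_threshold)

def revert_to_benign_checkpoint(checkpoints):
    # Sketch of the backward analysis: walk back from the newest stored
    # checkpoint until one passes both checks against its predecessor,
    # falling back to the oldest state if none does.
    for i in range(len(checkpoints) - 1, 0, -1):
        if looks_benign(checkpoints[i - 1], checkpoints[i]):
            return checkpoints[i]
    return checkpoints[0]

# Toy usage: a sequence of server-layer checkpoints with small benign drift.
rng = np.random.default_rng(0)
base = rng.normal(size=(256, 128))
checkpoints = [base + 0.01 * step * rng.normal(size=base.shape) for step in range(4)]
accepted = revert_to_benign_checkpoint(checkpoints)
print("accepted newest checkpoint:", accepted is checkpoints[-1])

In the paper, such scores are computed on the server-side layers across stored checkpoints; the concrete transform, metrics, and decision rule may differ from this sketch.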
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.

Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing its creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


*** This is a Security Bloggers Network syndicated blog from Infosecurity.US authored by Marc Handelman. Read the original post at: https://www.youtube-nocookie.com/embed/eMbVkGZo-bI?si=5pROfgOgQdTmxpAT


Source: https://securityboulevard.com/2025/12/safesplit-a-novel-defense-against-client-side-backdoor-attacks-in-split-learning/