PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-based Local Purification


Journal article


Yifeng Jiang, Xiaochen Yuan, Weiwen Zhang, Wei Ke, Chan-Tong Lam, Sio-Kei Im
IEEE Transactions on Information Forensics and Security, 2025, pp. 1-1


Cite
APA
Jiang, Y., Yuan, X., Zhang, W., Ke, W., Lam, C.-T., & Im, S.-K. (2025). PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-based Local Purification. IEEE Transactions on Information Forensics and Security, 1–1. https://doi.org/10.1109/TIFS.2025.3639936


Chicago/Turabian
Jiang, Yifeng, Xiaochen Yuan, Weiwen Zhang, Wei Ke, Chan-Tong Lam, and Sio-Kei Im. “PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-Based Local Purification.” IEEE Transactions on Information Forensics and Security (2025): 1–1.


MLA
Jiang, Yifeng, et al. “PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-Based Local Purification.” IEEE Transactions on Information Forensics and Security, 2025, pp. 1–1, doi:10.1109/TIFS.2025.3639936.


BibTeX

@article{jiang2025a,
  title = {PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-based Local Purification},
  year = {2025},
  journal = {IEEE Transactions on Information Forensics and Security},
  pages = {1-1},
  doi = {10.1109/TIFS.2025.3639936},
  author = {Jiang, Yifeng and Yuan, Xiaochen and Zhang, Weiwen and Ke, Wei and Lam, Chan-Tong and Im, Sio-Kei}
}

Figure: Overview of proposed PFL-ALP
Abstract: Federated learning (FL) enables collaborative model training while preserving local data privacy, but it is vulnerable to backdoor attacks from malicious clients. These attacks can manipulate the global model to produce malicious outputs when specific triggers are encountered. Existing defenses, categorized as server-side and client-side approaches, have limitations such as reliance on auxiliary data availability, susceptibility to inference attacks, and instability under non-independent and identically distributed (Non-IID) data. In response to these challenges, we propose Personalized Federated Learning via Attention-based Local Purification (PFL-ALP), a hybrid defense mechanism integrating server-side dynamic clustering with client-side purification enhanced by personalized model knowledge. This approach effectively mitigates the bias introduced by Non-IID data on the server side and further purifies the backdoored model on the client side. Specifically, we employ neural attention distillation (NAD) for model purification and enhance it with personalized model knowledge, extending the effectiveness of NAD to Non-IID FL settings. This design makes PFL-ALP compatible with privacy protocols that mitigate inference attacks. Moreover, we establish a convergence guarantee for PFL-ALP and experimentally validate its superior performance in defending against various backdoor attacks compared to multiple state-of-the-art (SOTA) defenses across three datasets. The results show that even with malicious rates ranging from 30% to 90%, PFL-ALP reduces the attack success rate by more than 69.4 percentage points, while the reduction in main-task accuracy remains below 12.4 percentage points.
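To give a flavor of the attention-based purification the abstract describes, the sketch below computes NAD-style spatial attention maps (channel-wise sum of squared activations, L2-normalized) and an attention-distillation loss between a student and a teacher model's feature maps. This is a minimal illustration in plain Python with hypothetical function names, not the paper's implementation; PFL-ALP additionally incorporates personalized model knowledge and server-side clustering, which are omitted here.

```python
import math

def attention_map(feature):
    """Collapse per-channel activations (a list of C channels, each a flat
    list of H*W values) into one spatial attention map: sum of squared
    activations over channels, then L2-normalized (NAD-style)."""
    hw = len(feature[0])
    amap = [sum(ch[i] ** 2 for ch in feature) for i in range(hw)]
    norm = math.sqrt(sum(v * v for v in amap)) or 1.0  # guard against all-zero maps
    return [v / norm for v in amap]

def nad_distillation_loss(student_feat, teacher_feat):
    """L2 distance between normalized attention maps. Minimizing this term
    during fine-tuning pulls the (possibly backdoored) student's attention
    toward the clean teacher's, suppressing trigger-sensitive neurons."""
    a_s = attention_map(student_feat)
    a_t = attention_map(teacher_feat)
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(a_s, a_t)))

# Toy example: two 2-channel feature maps over a 2-pixel spatial grid.
clean = [[1.0, 0.0], [0.0, 1.0]]
poisoned = [[3.0, 0.0], [0.0, 0.5]]
print(nad_distillation_loss(clean, clean))     # identical maps give zero loss
print(nad_distillation_loss(poisoned, clean))  # mismatched attention is penalized
```

In the actual algorithm this loss would be added to the ordinary task loss during local fine-tuning; the normalization step matters because it makes the comparison scale-invariant across layers and clients.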

