A Comprehensive Review of Interpretable Deep Learning Defences via Secure Federated Learning Frameworks: Security Models, Optimization Techniques, and Emerging Computing Applications
Abstract
The rapid evolution of deep learning systems has introduced unprecedented capabilities in intelligent decision-making across domains such as healthcare, finance, and smart cities. However, the opaque nature of deep neural networks and their susceptibility to adversarial attacks pose significant challenges to trust, security, and interpretability. Federated Learning (FL) has emerged as a decentralized paradigm that enables collaborative model training while preserving data privacy, thereby addressing critical concerns related to centralized data exposure. This paper presents a comprehensive review of interpretable deep learning defences within secure federated learning frameworks, focusing on security models, optimization techniques, and emerging computing applications. The study examines key threats such as model poisoning, inference attacks, and adversarial manipulation, alongside defence mechanisms including differential privacy, secure aggregation, and the integration of explainable AI (XAI). Furthermore, the role of interpretability techniques such as SHAP, LIME, and attention mechanisms in enhancing the transparency and trustworthiness of FL systems is analysed. The review also highlights optimization challenges related to communication efficiency, statistical and system heterogeneity, and scalability. Finally, it outlines future research directions, emphasizing the need for robust, interpretable, and resource-efficient federated systems suitable for real-world deployment.
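To make two of the surveyed defences concrete, the sketch below simulates one federated round in which each client's update is clipped and perturbed with Gaussian noise (the standard differential-privacy recipe) before server-side averaging. This is a minimal illustration, not the method of any specific paper under review; the model dimension, client count, clipping bound, and noise scale are all illustrative assumptions.

```python
# Minimal sketch: DP-style sanitization of client updates + federated
# averaging. All sizes and hyperparameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

DIM = 10            # hypothetical flattened model size
CLIP_NORM = 1.0     # per-client L2 clipping bound
NOISE_STD = 0.1     # Gaussian noise scale; larger values give stronger privacy

def dp_sanitize(update, clip_norm=CLIP_NORM, noise_std=NOISE_STD):
    """Clip a client's update to a fixed L2 norm, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_average(updates):
    """Server-side aggregation: plain mean of the sanitized updates.
    (A deployed system would wrap this step in secure aggregation so the
    server only ever observes the sum, never an individual update.)"""
    return np.mean(updates, axis=0)

# Simulate one communication round with 5 hypothetical clients.
global_model = np.zeros(DIM)
client_updates = [rng.normal(0.0, 0.5, DIM) for _ in range(5)]
sanitized = [dp_sanitize(u) for u in client_updates]
global_model += federated_average(sanitized)
print("Updated global model:", np.round(global_model, 3))
```

The clipping step bounds any single client's influence on the aggregate, which is what lets the added noise be calibrated to a privacy budget; the same structure also blunts model-poisoning attempts, since an outlier update cannot dominate the average.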
Article Details

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.