TY - GEN
T1 - An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning
AU - Roszel, Mary
AU - Norvill, Robert
AU - State, Radu
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Federated learning is a distributed setting in which multiple participants jointly train a machine learning model without exchanging data. Recent work has found that federated learning is vulnerable to backdoor model poisoning attacks, in which an attacker exploits this unique environment to submit malicious model updates. To address these malicious participants, several Byzantine-tolerant aggregation methods have been applied to the federated learning setting, including Krum, Multi-Krum, RFA, and Norm-Difference Clipping. In this work, we analyze the effectiveness and limits of each aggregation method and provide a thorough evaluation of their success across various fixed-frequency attack settings. Further, we analyze how such aggregation methods affect the fairness of the model's performance on its intended tasks. Our results indicate that only one defense successfully mitigates attacks in all attack scenarios, but a significant fairness issue is observed, highlighting the difficulty of preventing malicious attacks in a federated setting.
AB - Federated learning is a distributed setting in which multiple participants jointly train a machine learning model without exchanging data. Recent work has found that federated learning is vulnerable to backdoor model poisoning attacks, in which an attacker exploits this unique environment to submit malicious model updates. To address these malicious participants, several Byzantine-tolerant aggregation methods have been applied to the federated learning setting, including Krum, Multi-Krum, RFA, and Norm-Difference Clipping. In this work, we analyze the effectiveness and limits of each aggregation method and provide a thorough evaluation of their success across various fixed-frequency attack settings. Further, we analyze how such aggregation methods affect the fairness of the model's performance on its intended tasks. Our results indicate that only one defense successfully mitigates attacks in all attack scenarios, but a significant fairness issue is observed, highlighting the difficulty of preventing malicious attacks in a federated setting.
UR - http://www.scopus.com/inward/record.url?scp=85137108266&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-13448-7_12
DO - 10.1007/978-3-031-13448-7_12
M3 - Conference contribution
AN - SCOPUS:85137108266
SN - 9783031134470
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 143
EP - 155
BT - Modeling Decisions for Artificial Intelligence - 19th International Conference, MDAI 2022, Proceedings
A2 - Torra, Vicenç
A2 - Narukawa, Yasuo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2022
Y2 - 30 August 2022 through 2 September 2022
ER -