An Analysis of Byzantine-Tolerant Aggregation Mechanisms on Model Poisoning in Federated Learning

Mary Roszel*, Robert Norvill, Radu State

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Federated learning is a distributed setting in which multiple participants jointly train a machine learning model without exchanging data. Recent work has found that federated learning is vulnerable to backdoor model poisoning attacks, in which an attacker exploits this unique environment to submit malicious model updates. To counter such malicious participants, several Byzantine-tolerant aggregation methods have been applied to the federated learning setting, including Krum, Multi-Krum, RFA, and Norm-Difference Clipping. In this work, we analyze the effectiveness and limits of each aggregation method and provide a thorough analysis of their success under various fixed-frequency attack settings. Further, we analyze the impact of these aggregation methods on the fairness of the model's performance on its intended tasks. Our results indicate that only one defense successfully mitigates attacks in all attack scenarios, but a significant fairness issue is observed, highlighting the difficulty of preventing malicious attacks in a federated setting.
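For orientation, two of the aggregation rules named in the abstract can be sketched compactly. The following is a minimal NumPy illustration, not the paper's implementation: it assumes each client update is a flattened parameter vector, and the function names (krum, norm_clip), the clipping threshold tau, and the toy data are all illustrative assumptions.

import numpy as np

def krum(updates: np.ndarray, f: int) -> np.ndarray:
    # Krum (Blanchard et al., 2017): score each update by the summed
    # squared distance to its n - f - 2 nearest neighbours and return
    # the lowest-scoring one. Multi-Krum instead averages the m
    # lowest-scoring updates.
    n = len(updates)
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        nearest = np.sort(np.delete(dists[i], i))[: n - f - 2]
        scores.append(nearest.sum())
    return updates[int(np.argmin(scores))]

def norm_clip(updates: np.ndarray, tau: float) -> np.ndarray:
    # Norm clipping: rescale each update so its L2 norm is at most tau,
    # bounding any single client's influence, then average.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    clipped = updates * np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    return clipped.mean(axis=0)

# Toy example (illustrative data): four honest clients, one outlier.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(4, 10))
byzantine = np.full((1, 10), 50.0)        # poisoned update
updates = np.concatenate([honest, byzantine])
print(krum(updates, f=1))                 # discards the outlier entirely
print(norm_clip(updates, tau=1.0))        # keeps it but bounds its influence

The contrast between the two calls mirrors the paper's framing: selection-based rules such as Krum drop suspicious updates outright, while clipping-based rules retain every update but limit how far any one of them can move the global model.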

Original language: English
Title of host publication: Modeling Decisions for Artificial Intelligence - 19th International Conference, MDAI 2022, Proceedings
Editors: Vicenç Torra, Yasuo Narukawa
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 143-155
Number of pages: 13
ISBN (Print): 9783031134470
Publication status: Published - 2022
Externally published: Yes
Event: 19th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2022 - Sant Cugat, Spain
Duration: 30 Aug 2022 - 2 Sept 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13408 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th International Conference on Modeling Decisions for Artificial Intelligence, MDAI 2022
Country/Territory: Spain
City: Sant Cugat
Period: 30/08/22 - 2/09/22
