TY - GEN
T1 - On the Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
AU - Dyrmishi, Salijona
AU - Ghamizi, Salah
AU - Simonetto, Thibault
AU - Le Traon, Yves
AU - Cordy, Maxime
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - While the literature on security attacks and defenses of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concerns about the under-explored field of realistic adversarial attacks and their implications for the robustness of real-world systems. Our paper paves the way for a better understanding of adversarial robustness against realistic attacks and makes two major contributions. First, we conduct a study on three real-world use cases (text classification, botnet detection, malware detection) and seven datasets to evaluate whether unrealistic adversarial examples can be used to protect models against realistic examples. Our results reveal discrepancies across the use cases: unrealistic examples can be as effective as realistic ones in some cases, while offering only limited improvement in others. Second, to explain these results, we analyze the latent representations of the adversarial examples generated with realistic and unrealistic attacks. We shed light on the patterns that determine which unrealistic examples can be used for effective hardening. We release our code, datasets, and models to support future research on reducing the gap between unrealistic and realistic adversarial attacks.
KW - adversarial attacks
KW - constrained feature space
KW - hardening
KW - problem space
UR - http://www.scopus.com/inward/record.url?scp=85160328956&partnerID=8YFLogxK
U2 - 10.1109/SP46215.2023.10179316
DO - 10.1109/SP46215.2023.10179316
M3 - Conference contribution
AN - SCOPUS:85160328956
T3 - Proceedings - IEEE Symposium on Security and Privacy
SP - 1384
EP - 1400
BT - Proceedings - 44th IEEE Symposium on Security and Privacy, SP 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 44th IEEE Symposium on Security and Privacy, SP 2023
Y2 - 22 May 2023 through 25 May 2023
ER -