Adversarial Robustness in Multi-Task Learning: Promises and Illusions

Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Vulnerability to adversarial attacks is a well-known weakness of deep neural networks. While most studies focus on single-task neural networks on computer vision datasets, very little research has considered the complex multi-task models that are common in real applications. In this paper, we evaluate the design choices that impact the robustness of multi-task deep learning networks. We provide evidence that blindly adding auxiliary tasks, or weighting the tasks, provides a false sense of robustness. We thereby tone down the claims made by previous research and study the different factors that may affect robustness. In particular, we show that the choice of the tasks incorporated in the loss function is an important factor that can be leveraged to yield more robust models. We provide the appendix, all our algorithms, models, and open-source code at https://github.com/yamizi/taskaugment.
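To make the setting concrete, the sketch below illustrates the kind of multi-task setup the abstract describes: a shared encoder with a main head and an auxiliary head, a weighted multi-task loss, and a PGD attack computed against the joint objective. This is a minimal illustration in PyTorch, not the paper's implementation; all names (MultiHeadNet, joint_loss, pgd_attack) and the task weights w are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadNet(nn.Module):
    """Shared encoder with a main classification head and an auxiliary head
    (e.g. a self-supervised task such as rotation prediction)."""
    def __init__(self, num_classes=10, aux_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.main_head = nn.Linear(32, num_classes)  # main task
        self.aux_head = nn.Linear(32, aux_classes)   # auxiliary task

    def forward(self, x):
        z = self.encoder(x)
        return self.main_head(z), self.aux_head(z)

def joint_loss(model, x, y_main, y_aux, w=(1.0, 0.5)):
    """Weighted multi-task loss; w is the task weighting whose effect on
    robustness the paper studies (values here are placeholders)."""
    out_main, out_aux = model(x)
    return (w[0] * F.cross_entropy(out_main, y_main)
            + w[1] * F.cross_entropy(out_aux, y_aux))

def pgd_attack(model, x, y_main, y_aux, eps=8/255, alpha=2/255, steps=10):
    """PGD crafted against the *joint* loss, so the attacker exploits every
    task head rather than only the main one."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = joint_loss(model, x_adv, y_main, y_aux)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Example usage with random data:
model = MultiHeadNet()
x = torch.rand(8, 3, 32, 32)
y_main = torch.randint(0, 10, (8,))
y_aux = torch.randint(0, 4, (8,))
x_adv = pgd_attack(model, x, y_main, y_aux)

As the abstract suggests, the gap between attacking only the main head and attacking the joint objective is one place where a "false sense of robustness" can arise: a multi-task model may look robust under single-task attacks while remaining vulnerable to attacks that target all of its tasks.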

Original language: English
Title of host publication: AAAI-22 Technical Tracks 1
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 697-705
Number of pages: 9
ISBN (Electronic): 1577358767, 9781577358763
DOIs
Publication status: Published - 30 Jun 2022
Externally published: Yes
Event: 36th AAAI Conference on Artificial Intelligence, AAAI 2022 - Virtual, Online
Duration: 22 Feb 2022 – 1 Mar 2022

Publication series

Name: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
Volume: 36

Conference

Conference: 36th AAAI Conference on Artificial Intelligence, AAAI 2022
City: Virtual, Online
Period: 22/02/22 – 01/03/22
