GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks

Salah Ghamizi, Jingfeng Zhang*, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computation needed to train models. To mitigate these costs, we propose Guided Adversarial Training (GAT), a novel adversarial training technique that exploits auxiliary tasks under a limited set of training data. Our approach extends single-task models into multi-task models during the min-max optimization of adversarial training, and drives the loss optimization with a regularization of the gradient curvature across multiple tasks. Experimentally, GAT increases the robust AUC on the CheXpert medical imaging dataset from 50% to 83%. On CIFAR-10, GAT outperforms eight state-of-the-art adversarial training methods and achieves 56.21% robust accuracy with ResNet-50. Overall, we demonstrate that guided multi-task learning is an actionable and promising avenue to push the boundaries of model robustness further.
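To make the multi-task min-max formulation concrete, below is a minimal PyTorch sketch of adversarial training over a shared encoder with a main head and an auxiliary head. Every name here (MultiTaskNet, pgd_attack, the fixed weight alpha) is an illustrative assumption rather than the authors' implementation; in particular, GAT selects Pareto-optimal task weightings and regularizes gradient curvature across tasks, whereas this sketch uses a fixed weighted sum of task losses to stay short.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    # Shared encoder with one head per task (main classification + auxiliary).
    def __init__(self, num_classes=10, num_aux=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.main_head = nn.Linear(32, num_classes)
        self.aux_head = nn.Linear(32, num_aux)  # e.g. rotation prediction

    def forward(self, x):
        z = self.encoder(x)
        return self.main_head(z), self.aux_head(z)

def pgd_attack(model, x, y_main, eps=8/255, step=2/255, iters=10):
    # Inner maximization: L-infinity PGD on the main-task loss.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        logits_main, _ = model(x_adv)
        loss = F.cross_entropy(logits_main, y_main)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_step(model, opt, x, y_main, y_aux, alpha=0.5):
    # Outer minimization on adversarial examples: main loss plus a
    # weighted auxiliary loss. GAT derives the task weighting from a
    # Pareto-optimality criterion; the fixed alpha is a simplification.
    x_adv = pgd_attack(model, x, y_main)
    logits_main, logits_aux = model(x_adv)
    loss = (F.cross_entropy(logits_main, y_main)
            + alpha * F.cross_entropy(logits_aux, y_aux))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in data (hypothetical CIFAR-10-like shapes):
model = MultiTaskNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x = torch.rand(8, 3, 32, 32)
y_main = torch.randint(0, 10, (8,))
y_aux = torch.randint(0, 4, (8,))  # e.g. which of 4 rotations was applied
print(train_step(model, opt, x, y_main, y_aux))

A hard-coded alpha illustrates the mechanics but makes the main/auxiliary trade-off a fixed hyperparameter; the Pareto-optimal weighting described in the paper is precisely meant to avoid baking that trade-off in.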

Original language: English
Pages (from-to): 11255-11282
Number of pages: 28
Journal: Proceedings of Machine Learning Research
Volume: 202
Publication status: Published - 2023
Externally published: Yes
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023
