A benchmark for neural network robustness in skin cancer classification

Roman C. Maron, Justin G. Schlager, Sarah Haggenmüller, Christof von Kalle, Jochen S. Utikal, Friedegund Meier, Frank F. Gellrich, Sarah Hobelsberger, Axel Hauschild, Lars French, Lucie Heinzerling, Max Schlaak, Kamran Ghoreschi, Franz J. Hilke, Gabriela Poch, Markus V. Heppt, Carola Berking, Sebastian Haferkamp, Wiebke Sondermann, Dirk Schadendorf, Bastian Schilling, Matthias Goebeler, Eva Krieghoff-Henning, Achim Hekler, Stefan Fröhling, Daniel B. Lipka, Jakob N. Kather, Titus J. Brinker*

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review

54 Citations (Scopus)

Abstract

Background: One prominent application for deep learning–based classifiers is skin cancer classification on dermoscopic images. However, classifier evaluation is often limited to holdout data which can mask common shortcomings such as susceptibility to confounding factors. To increase clinical applicability, it is necessary to thoroughly evaluate such classifiers on out-of-distribution (OOD) data. Objective: The objective of the study was to establish a dermoscopic skin cancer benchmark in which classifier robustness to OOD data can be measured. Methods: Using a proprietary dermoscopic image database and a set of image transformations, we create an OOD robustness benchmark and evaluate the robustness of four different convolutional neural network (CNN) architectures on it. Results: The benchmark contains three data sets—Skin Archive Munich (SAM), SAM-corrupted (SAM-C) and SAM-perturbed (SAM-P)—and is publicly available for download. To maintain the benchmark's OOD status, ground truth labels are not provided and test results should be sent to us for assessment. The SAM data set contains 319 unmodified and biopsy-verified dermoscopic melanoma (n = 194) and nevus (n = 125) images. SAM-C and SAM-P contain images from SAM which were artificially modified to test a classifier against low-quality inputs and to measure its prediction stability over small image changes, respectively. All four CNNs showed susceptibility to corruptions and perturbations. Conclusions: This benchmark provides three data sets which allow for OOD testing of binary skin cancer classifiers. Our classifier performance confirms the shortcomings of CNNs and provides a frame of reference. Altogether, this benchmark should facilitate a more thorough evaluation process and thereby enable the development of more robust skin cancer classifiers.
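The SAM-C and SAM-P data sets described above are built by applying image transformations to clean dermoscopic images and then checking how a classifier's predictions hold up. As a hedged illustration only (the actual SAM-C/SAM-P transformation pipeline is not specified in this abstract), the sketch below shows one generic corruption of this kind, Gaussian noise, and a simple prediction-flip rate as a stability measure; the function names and the toy classifier are hypothetical:

```python
import numpy as np

def corrupt_gaussian_noise(image: np.ndarray, severity: float = 0.1,
                           seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to a float image in [0, 1].

    A generic low-quality-input corruption in the spirit of SAM-C;
    not the benchmark's actual transformation set.
    """
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, severity, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def prediction_flip_rate(predict, images, corrupt) -> float:
    """Fraction of images whose predicted label changes under a corruption.

    A simple stability measure in the spirit of SAM-P's prediction-stability
    testing: lower is more robust.
    """
    flips = sum(predict(img) != predict(corrupt(img)) for img in images)
    return flips / len(images)

# Toy usage with a trivial intensity-threshold "classifier" standing in
# for a CNN (hypothetical, for illustration only).
if __name__ == "__main__":
    predict = lambda im: int(im.mean() > 0.5)
    images = [np.full((64, 64), v) for v in (0.2, 0.49, 0.51, 0.8)]
    rate = prediction_flip_rate(predict, images, corrupt_gaussian_noise)
    print(f"flip rate: {rate:.2f}")
```

A real evaluation would replace the toy predictor with the CNN under test and sweep corruption severity, reporting accuracy on corrupted inputs (SAM-C-style) and flip rate on perturbed inputs (SAM-P-style) alongside clean-data accuracy.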

Original language: English
Pages (from-to): 191-199
Number of pages: 9
Journal: European Journal of Cancer
Volume: 155
DOIs
Publication status: Published - Sept 2021
Externally published: Yes

Keywords

  • Artificial intelligence
  • Benchmarking
  • Deep learning
  • Dermatology
  • Melanoma
  • Nevus
  • Skin neoplasms
