Combining CNN-based histologic whole slide image analysis and patient data to improve skin cancer classification

Julia Höhn, Eva Krieghoff-Henning, Tanja B. Jutzi, Christof von Kalle, Jochen S. Utikal, Friedegund Meier, Frank F. Gellrich, Sarah Hobelsberger, Axel Hauschild, Justin G. Schlager, Lars French, Lucie Heinzerling, Max Schlaak, Kamran Ghoreschi, Franz J. Hilke, Gabriela Poch, Heinz Kutzner, Markus V. Heppt, Sebastian Haferkamp, Wiebke Sondermann, Dirk Schadendorf, Bastian Schilling, Matthias Goebeler, Achim Hekler, Stefan Fröhling, Daniel B. Lipka, Jakob N. Kather, Dieter Krahl, Gerardo Ferrara, Sarah Haggenmüller, Titus J. Brinker*

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review

73 Citations (Scopus)

Abstract

Background: Clinicians and pathologists traditionally use patient data in addition to clinical examination to support their diagnoses.

Objectives: We investigated whether combining histologic whole slide image (WSI) analysis based on convolutional neural networks (CNNs) with commonly available patient data (age, sex and anatomical site of the lesion) could improve performance in a binary melanoma/nevus classification task compared with CNNs alone.

Methods: We used 431 WSIs from two different laboratories and analysed the performance of classifiers that used the image or patient data individually, as well as three common fusion techniques. Furthermore, we tested a naive combination of patient data and an image classifier: for cases interpreted as ‘uncertain’ (CNN output score <0.7), the decision of the CNN was replaced by the decision of the patient data classifier.

Results: The CNN on its own achieved the best performance (mean ± standard deviation of five individual runs), with an AUROC of 92.30% ± 0.23% and a balanced accuracy of 83.17% ± 0.38%. While none of the tested fusions significantly improved classification performance in general, the naive strategy of replacing the image classifier’s decision with that of the patient data classifier on slides with low output scores improved balanced accuracy to 86.72% ± 0.36%.

Conclusion: In most cases, the CNN on its own was so accurate that integrating patient data provided no benefit. However, incorporating patient data for lesions that the CNN classified with low ‘confidence’ improved balanced accuracy.
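The ‘naive’ combination described in the abstract amounts to a confidence-gated fallback rule. A minimal sketch in Python, assuming the CNN’s output score is the probability of the predicted class and that `patient_pred` is the (hypothetical) patient-data classifier’s binary decision; the 0.7 threshold is taken from the abstract, all function and variable names are illustrative:

```python
def fused_decision(cnn_score: float, patient_pred: int, threshold: float = 0.7) -> int:
    """Confidence-gated fallback: keep the CNN's class (1 = melanoma,
    0 = nevus, a hypothetical encoding) when it is 'confident', and
    otherwise defer to the patient-data classifier's decision.

    cnn_score:    CNN probability for the positive class, in [0, 1]
    patient_pred: binary decision of the patient-data classifier
    threshold:    minimum class probability counted as 'confident'
    """
    cnn_pred = int(cnn_score >= 0.5)
    # Confidence of the predicted class, whichever side of 0.5 it falls on.
    confidence = max(cnn_score, 1.0 - cnn_score)
    return cnn_pred if confidence >= threshold else patient_pred
```

For example, a slide with `cnn_score = 0.95` keeps the CNN’s decision, while one with `cnn_score = 0.60` (confidence below 0.7) falls back to the patient-data classifier.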

Original language: English
Pages (from-to): 94-101
Number of pages: 8
Journal: European Journal of Cancer
Volume: 149
DOIs
Publication status: Published - May 2021
Externally published: Yes

Keywords

  • Convolutional neural networks
  • Data fusion
  • Histologic whole slide images
  • Patient data
  • Skin cancer classification

