TY - JOUR
T1 - Detection of COVID-19 from voice, cough and breathing patterns
T2 - Dataset and preliminary results
AU - Despotovic, Vladimir
AU - Ismael, Muhannad
AU - Cornil, Maël
AU - Call, Roderick Mc
AU - Fagherazzi, Guy
N1 - Funding Information:
Muhannad Ismael, Maël Cornil and Roderick Mc Call are supported by the Luxembourg Institute of Science and Technology (LIST), and Vladimir Despotovic by the University of Luxembourg (UL), as well as by the Luxembourg National Research Fund (FNR) (CDCVA, Grant Number 14856868). Guy Fagherazzi is supported by the Luxembourg Institute of Health, the FNR (Predi-COVID, Grant Number 14716273) and the Andre Losch Fondation. The authors declare no competing interests.
Publisher Copyright:
© 2021
PY - 2021/11
Y1 - 2021/11
N2 - COVID-19 heavily affects breathing and voice and causes symptoms that make patients’ voices distinctive, creating recognizable audio signatures. Initial studies have already suggested the potential of using voice as a screening solution. In this article we present a dataset of voice, cough and breathing audio recordings collected from individuals infected by the SARS-CoV-2 virus, as well as from non-infected subjects, via a large-scale crowdsourcing campaign. We describe preliminary results for the detection of COVID-19 from cough patterns using standard acoustic feature sets, wavelet scattering features and deep audio embeddings extracted from low-level feature representations (VGGish and OpenL3). Our models achieve an accuracy of 88.52%, a sensitivity of 88.75% and a specificity of 90.87%, confirming the applicability of audio signatures to identify COVID-19 symptoms. We furthermore provide an in-depth analysis of the most informative acoustic features and try to elucidate the mechanisms that alter the acoustic characteristics of the coughs of people with COVID-19.
AB - COVID-19 heavily affects breathing and voice and causes symptoms that make patients’ voices distinctive, creating recognizable audio signatures. Initial studies have already suggested the potential of using voice as a screening solution. In this article we present a dataset of voice, cough and breathing audio recordings collected from individuals infected by the SARS-CoV-2 virus, as well as from non-infected subjects, via a large-scale crowdsourcing campaign. We describe preliminary results for the detection of COVID-19 from cough patterns using standard acoustic feature sets, wavelet scattering features and deep audio embeddings extracted from low-level feature representations (VGGish and OpenL3). Our models achieve an accuracy of 88.52%, a sensitivity of 88.75% and a specificity of 90.87%, confirming the applicability of audio signatures to identify COVID-19 symptoms. We furthermore provide an in-depth analysis of the most informative acoustic features and try to elucidate the mechanisms that alter the acoustic characteristics of the coughs of people with COVID-19.
KW - Artificial intelligence
KW - Cough
KW - COVID-19
KW - Digital biomarker
KW - Voice
UR - http://www.scopus.com/inward/record.url?scp=85117116550&partnerID=8YFLogxK
UR - https://www.ncbi.nlm.nih.gov/pubmed/34656870
U2 - 10.1016/j.compbiomed.2021.104944
DO - 10.1016/j.compbiomed.2021.104944
M3 - Article
C2 - 34656870
AN - SCOPUS:85117116550
SN - 0010-4825
VL - 138
SP - 104944
JO - Computers in Biology and Medicine
JF - Computers in Biology and Medicine
M1 - 104944
ER -