Title
SleepTransformer: Automatic Sleep Staging With Interpretability and Uncertainty Quantification.
Type
article
Institution
External
Journal
Author(s)
Phan, H.
Author
Mikkelsen, K.
Author
Chén, Oliver Y
Author
Koch, P.
Author
Mertins, A.
Author
De Vos, M.
Author
ISSN
1558-2531
Editorial status
Published
Publication date
2022-08
Volume
69
Issue
8
First page
2456
Last page/article number
2467
Peer-reviewed
Yes
Language
English
Notes
Publication types: Journal Article; Research Support, Non-U.S. Gov't
Publication Status: ppublish
Abstract
Black-box skepticism is one of the main obstacles preventing deep-learning-based automatic sleep scoring from being adopted in clinical environments.
Towards interpretability, this work proposes a sequence-to-sequence sleep-staging model, SleepTransformer. Built on the transformer backbone, it offers interpretability of the model's decisions at both the epoch and sequence level. We further propose a simple yet efficient, entropy-based method to quantify uncertainty in the model's decisions; the resulting score can serve as a metric for deferring low-confidence epochs to a human expert for further inspection (see the first sketch after the abstract).
To make sense of the transformer's self-attention scores for interpretability, the attention scores are, at the epoch level, encoded as a heat map that highlights the sleep-relevant features captured from the input EEG signal. At the sequence level, they are visualized as the influence of different neighboring epochs in an input sequence (i.e., the context) on the recognition of a target epoch, mimicking the way human experts score manually (see the second sketch after the abstract).
Additionally, we demonstrate that SleepTransformer performs on par with existing methods on two databases of different sizes.
Equipped with interpretability and the ability to quantify uncertainty, SleepTransformer holds promise for integration into clinical settings.
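The abstract describes the uncertainty metric only as "based on entropy". Below is a minimal sketch, assuming the metric is the Shannon entropy of each epoch's softmax posterior over the sleep stages; the five-stage layout, the `epoch_entropy`/`defer_mask` names, and the threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def epoch_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy (in nats) of per-epoch sleep-stage posteriors.

    probs: shape (n_epochs, n_stages); each row is a softmax output,
    e.g. over the five AASM stages (W, N1, N2, N3, REM), summing to 1.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def defer_mask(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Flag low-confidence epochs (entropy above `threshold`) so they
    can be deferred to a human expert for further inspection."""
    return epoch_entropy(probs) > threshold

# Three scored epochs over five stages.
p = np.array([
    [0.96, 0.01, 0.01, 0.01, 0.01],   # confident -> low entropy
    [0.30, 0.25, 0.20, 0.15, 0.10],   # ambiguous -> high entropy
    [0.05, 0.05, 0.80, 0.05, 0.05],
])
print(epoch_entropy(p))               # approx. [0.22 1.55 0.78]
print(defer_mask(p, threshold=1.0))   # [False  True False]
```

Since the maximum entropy over five stages is ln 5 ≈ 1.61 nats, the score could also be normalized to [0, 1] before thresholding; where the threshold is set trades off expert workload against scoring reliability.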
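For the sequence-level visualization, here is a toy sketch of rendering an epoch-to-epoch attention matrix as a heat map. It assumes a row-normalized matrix of attention weights (e.g. from one head, or averaged over heads) is already available; the sequence length and the random matrix are stand-ins for real model output, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the sequence-level attention: attn[t, c] is the weight
# that context epoch c receives when the model scores target epoch t.
L = 21                                   # epochs per input sequence (illustrative)
rng = np.random.default_rng(0)
attn = rng.random((L, L))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalize, like a softmax

fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(attn, aspect="auto", cmap="hot", origin="lower")
ax.set_xlabel("Context epoch in sequence")
ax.set_ylabel("Target epoch in sequence")
fig.colorbar(im, ax=ax, label="Attention weight")
plt.tight_layout()
plt.show()
```

The epoch-level heat map described in the abstract is analogous, but overlays the attention scores on the epoch's EEG input rather than on epoch indices.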
PID Serval
serval:BIB_899B6B36D6A0
PMID
Creation date
2024-01-11T17:05:34.549Z
Creation date in IRIS
2025-05-20T21:00:07Z
File(s)
Name
SleepTransformer.pdf
Manuscript version
published
Size
5.08 MB
Format
Adobe PDF
PID Serval
serval:BIB_899B6B36D6A0.P001
Checksum
(MD5):2574147c41a50fef082911f83b074f5a