Pre-Trained Wavelet Transformer for Seizure Detection Using Scalp EEG
Abstract number :
1.192
Submission category :
3. Neurophysiology / 3G. Computational Analysis & Modeling of EEG
Year :
2021
Submission ID :
1826270
Source :
www.aesnet.org
Presentation date :
12/4/2021 12:00:00 PM
Published date :
Nov 22, 2021, 06:53 AM
Authors :
Yi Cheng Zhu, Undergraduate - University of Toronto; Qi Xu - University of Manitoba
Rationale: An epileptic seizure is caused by abnormal, excessive, or synchronous neuronal activity in the brain. These brain signals are commonly monitored using International 10-20 system scalp electroencephalography (EEG). The signals are complex, noisy, non-linear, and non-stationary, and they produce a high volume of data, making seizure detection a challenging task. Deep machine learning has achieved outstanding performance in processing high-volume, complex, non-linear data in other fields and has recently been applied to EEG seizure detection.
One significant limitation of applying machine learning in the medical field is the lack of high-quality data. Seizure labeling must be done by a trained neurophysiologist, making labeled datasets scarce and expensive to create compared to unlabeled datasets. Previous machine learning models have been limited by their architecture to using only labeled datasets.
We propose a novel machine learning architecture that can be pre-trained on a large unlabeled dataset as an autoencoder model. We can then modify it into a classification model that is fine-tuned on the smaller, expert-labeled dataset, transferring knowledge between the two models.
Methods: We introduced a novel transformer-based neural network architecture for seizure detection. Automatic seizure detection is a task designed to help expedite the EEG review process for neurologists. Our model can be pre-trained on a large, unlabeled dataset before being fine-tuned on a small, labeled dataset. We pre-train our model on the TUH EEG Corpus, which contains 16,986 EEG sessions from 10,874 unique subjects. We then fine-tune on the TUH EEG Seizure Corpus, a subset of the TUH EEG Corpus with 1,185 sessions from 592 subjects that have been labeled for seizures by experts.
We preprocess the raw, 21-channel, standard 10-20 system EEG data by normalizing each channel to a Z-score. Our model consists of a learnable wavelet transform layer, a 1D convolution layer to downsample the data, a 32-layer, 16-head transformer encoder, and a transformer decoder. The model is trained on a masked-reconstruction task: part of the EEG data is omitted from the model's input, and the model is challenged to reconstruct the omitted data.
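The preprocessing and masking steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the mask ratio, segment length, and zero-filling of masked samples are assumptions for demonstration.

```python
import numpy as np

def zscore_per_channel(eeg):
    """Normalize each channel of a (channels, samples) EEG array to zero mean, unit variance."""
    mean = eeg.mean(axis=1, keepdims=True)
    std = eeg.std(axis=1, keepdims=True)
    return (eeg - mean) / (std + 1e-8)  # epsilon guards against flat channels

def mask_segments(eeg, mask_ratio=0.15, seg_len=50, rng=None):
    """Zero out random contiguous time segments across all channels.

    Returns the masked input and the boolean time mask; during pre-training
    the model is asked to reconstruct the original values at masked positions.
    """
    rng = rng or np.random.default_rng(0)
    n_ch, n_samp = eeg.shape
    mask = np.zeros(n_samp, dtype=bool)
    n_segments = max(int(mask_ratio * n_samp / seg_len), 1)
    for _ in range(n_segments):
        start = rng.integers(0, n_samp - seg_len)
        mask[start:start + seg_len] = True
    masked = eeg.copy()
    masked[:, mask] = 0.0
    return masked, mask

# 21-channel 10-20 montage, hypothetical 1000-sample window
x = np.random.default_rng(1).standard_normal((21, 1000))
xn = zscore_per_channel(x)
xm, m = mask_segments(xn)
```

The reconstruction loss would then be computed only over the masked positions, so the model cannot succeed by simply copying its input.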
In the fine-tuning step, the transformer decoder is replaced with two fully connected linear layers. A Rectified Linear Unit (ReLU) activation is applied before the fully connected layers, and a softmax activation is applied to the final output.
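A minimal sketch of this classification head, assuming hypothetical layer sizes and a two-class (seizure / background) output; the abstract does not state the hidden dimensions, so those are illustrative only:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def classification_head(features, w1, b1, w2, b2):
    """Two fully connected layers that replace the decoder during fine-tuning."""
    h = relu(features)       # ReLU applied before the fully connected layers
    h = h @ w1 + b1          # first linear layer
    logits = h @ w2 + b2     # second linear layer
    return softmax(logits)   # class probabilities over {seizure, background}

rng = np.random.default_rng(0)
d_model, d_hidden, n_classes = 64, 32, 2        # hypothetical sizes
feat = rng.standard_normal((4, d_model))        # batch of 4 encoder outputs
w1, b1 = rng.standard_normal((d_model, d_hidden)) * 0.1, np.zeros(d_hidden)
w2, b2 = rng.standard_normal((d_hidden, n_classes)) * 0.1, np.zeros(n_classes)
probs = classification_head(feat, w1, b1, w2, b2)
```

Because only the head is new, the pre-trained encoder weights are retained, which is what allows the small labeled corpus to suffice for fine-tuning.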
Results: We evaluated the model's performance on the dev-test subset of the TUH EEG Seizure Corpus. We used the Time-Aligned Event Score (TAES), the metric used in the Neureka 2020 Epilepsy Challenge, which is based on this dataset. The pilot model achieved 1.26% sensitivity and a false alarm rate of 857.01 per 24 hours.
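TAES itself weights each detection by its temporal overlap with the reference annotation, but the false-alarm normalization in the result above is simple arithmetic: false alarms are scaled to a 24-hour recording period. A sketch with illustrative numbers (not the study's actual counts):

```python
def false_alarm_rate_per_24h(n_false_alarms, total_record_seconds):
    """Normalize a raw false-alarm count to a 24-hour recording period."""
    hours = total_record_seconds / 3600.0
    return n_false_alarms * 24.0 / hours

# e.g. 100 false alarms over 12 hours of EEG scales to 200 per 24 hours
rate = false_alarm_rate_per_24h(100, 12 * 3600)
```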
Conclusions: Our model has shown promising initial results on the TUH EEG dataset. Further training with a larger dataset is expected to improve upon these results. The adaptability of the pre-trained model allows it to be easily fine-tuned for other EEG tasks, such as interictal epileptiform discharge detection.
Funding: No funding received.