Multi-Class Seizure Type Classification Using Scalp EEG Data and Machine Learning

Abstract number : 3.114
Submission category : 2. Translational Research / 2E. Other
Year : 2019
Submission ID : 2422348
Source : www.aesnet.org
Presentation date : 5/9/2019 1:55:12 PM
Published date : Dec 4, 2019, 6:00 PM

Authors :
Subhrajit Roy, IBM Research; Christian Meisel, Boston Children's Hospital; Umar Asif, IBM Research; Jianbin Tang, IBM Research; Stefan Harrer, IBM Research

Rationale: Reliable assessments of seizure frequency and seizure type are quintessential for epilepsy diagnosis, syndrome evaluation, treatment selection, and prognosis. Continuous EEG is the gold standard for detecting epileptic seizures in long-term epilepsy monitoring and intensive care units. Along with history and video, EEG is used to determine seizure types, and this information ultimately forms the basis for treatment decisions. However, continuous video-EEG review requires specifically trained epileptologists and technologists, is time consuming, and is of limited availability in many settings. By providing a reliable report of seizure occurrence and type, automatic seizure detection and classification methods may help overcome these limitations, reduce the time needed to review and classify epileptic seizures, and provide important clinical decision support.

Methods: The TUH EEG Seizure Corpus [1] contains seizure-type-annotated scalp electroencephalography (EEG) recording sessions from 314 patients, collected in the standard 10-20 system at 250 Hz and amounting to a total of 2,012 seizures, each labelled as one of the following types: focal non-specific, generalized non-specific, simple partial, complex partial, absence, tonic, and tonic-clonic. The dataset comprises average- and left-ear-referenced EEG sessions. First, we transformed the recorded EEG signal using a Temporal Central Parasagittal (TCP) montage. Next, we applied a Fast Fourier Transform (FFT) to each W seconds of clips (with O seconds of overlap) across all montage channels. Then we fed the logarithm of the magnitudes of the frequency components (range 1 to f_max Hz) into various machine learning algorithms for classification (Figure) [2]. We also developed a novel variant of Convolutional Neural Networks (CNNs) named SeizureNet, which learns features at different spatial and temporal resolutions [3]. Following a 5-fold cross-validation strategy, for each split (fold) the seizures of each type were divided into training (60%), validation (20%), and test (20%) sets. We chose the hyper-parameters for pre-processing and for the machine learning algorithms based on model performance on the validation set. Because some seizure classes were over-represented and some under-represented, during training we under-sampled the majority classes by randomly picking training samples without replacement. During evaluation, each seizure is classified into a type by taking a vote over the predictions made for each sample generated from it.

Results: We used weighted F1 scores to compare model performance on the test set, which was kept untouched during hyper-parameter selection. The weighted F1 scores, averaged over 5 runs, obtained by k-NN, the SGD classifier, XGBoost, AdaBoost, CNN, and SeizureNet were 0.884, 0.649, 0.782, 0.509, 0.723, and 0.900, respectively (Figure).

Conclusions: These results demonstrate that reliable, multi-class, machine-learning-based seizure type classification using scalp EEG data is possible. The best-performing model, the deep learning network SeizureNet, used EEG data only. Future work will explore advanced model architectures for multi-class seizure type detection and incorporate additional clinical and electrophysiological biomarkers.
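To make the Methods pipeline concrete, the sketch below illustrates the windowed FFT feature extraction in Python. It is not the authors' released code: the function name is ours, and window_s, overlap_s, and fmax are placeholder values standing in for the W, O, and f_max that the abstract leaves symbolic.

```python
import numpy as np

def fft_log_features(montage_eeg, fs=250, window_s=2.0, overlap_s=1.0, fmax=24.0):
    """Slide a W-second window (with O seconds of overlap) over TCP-montaged
    EEG, apply an FFT per montage channel, and keep the log magnitudes of the
    1..fmax Hz components as one feature vector per window.

    montage_eeg: array of shape (n_montage_channels, n_samples).
    window_s, overlap_s, and fmax are placeholder assumptions.
    """
    win = int(window_s * fs)
    step = int((window_s - overlap_s) * fs)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= fmax)                # keep 1..fmax Hz
    features = []
    for start in range(0, montage_eeg.shape[1] - win + 1, step):
        clip = montage_eeg[:, start:start + win]
        spectrum = np.fft.rfft(clip, axis=1)               # FFT per montage channel
        logmag = np.log(np.abs(spectrum[:, band]) + 1e-8)  # log magnitude
        features.append(logmag.ravel())                    # one sample per window
    return np.stack(features)
```

Each row of the returned array is one sample fed to the classifiers; all windows cut from the same seizure share that seizure's type label.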
References:
1. V. Shah et al., 'The Temple University Hospital Seizure Detection Corpus', Front. Neuroinform. 12, 83 (2018).
2. S. Roy et al., 'Machine learning for seizure type classification: setting the benchmark', arXiv (2019).
3. U. Asif et al., 'SeizureNet: A deep convolutional neural network for accurate seizure type classification and seizure detection', arXiv (2019).

Funding: No funding was received in support of this abstract.
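The class balancing and seizure-level voting described in Methods can also be sketched in a few lines; the function names and random seed below are our assumptions, not code from the cited papers.

```python
import numpy as np

def undersample_majority(X, y, seed=0):
    """Balance the training set by randomly keeping, for every class, as many
    samples as the rarest class has, drawn without replacement (Methods)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_keep = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_keep, replace=False)
        for c in classes
    ])
    return X[idx], y[idx]

def seizure_label_by_vote(window_predictions):
    """Classify a seizure by majority vote over the predictions made for the
    windows (samples) generated from it (Methods, evaluation step)."""
    labels, votes = np.unique(window_predictions, return_counts=True)
    return labels[np.argmax(votes)]
```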
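Finally, the weighted F1 score used to compare models in Results is the per-class F1 averaged with class-frequency weights, which accounts for the imbalanced seizure-type distribution. It can be computed with scikit-learn; the labels below are fabricated placeholders purely to show the call, not data from the study.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, size=200)   # 7 seizure types, toy ground truth
y_pred = rng.integers(0, 7, size=200)   # stand-in seizure-level predictions

# Per-class F1 averaged with class-frequency weights.
print(f1_score(y_true, y_pred, average="weighted"))
```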