
Using Fully Connected Layer to Extract Interpretable Component for Electroencephalogram Classification: A Clinical Application of Convolutional Neural Network

Abstract number : 2.061
Submission category : 3. Neurophysiology / 3G. Computational Analysis & Modeling of EEG
Year : 2022
Submission ID : 2204282
Source : www.aesnet.org
Presentation date : 12/4/2022 12:00:00 PM
Published date : Nov 22, 2022, 05:24 AM

Authors :
Qi Huang, MD – Zhongshan Hospital of Fudan University; Jing Ding, M.D. – Neurology – Zhongshan Hospital of Fudan University; Xin Wang, M.D. – Zhongshan Hospital of Fudan University

Rationale: Convolutional neural networks (CNNs) have been widely employed for automatic electroencephalogram (EEG) classification, but interpretation of this architecture remains obscure. Given that intelligibility may open opportunities for discovering underlying physiological or pathophysiological knowledge, interpretable methods seem quite important in CNN design. Previous approaches to CNN interpretation show that weights in the first few layers are interpretable, but the interpretability of weights in the deeper fully connected layer (FCL) remains unclear. Here we introduce a novel lightweight CNN that extracts an interpretable EEG component using the FCL.

Methods: We designed a novel FCL block as part of a CNN architecture for feature extraction. To guarantee that the CNN would learn task-related frequencies rather than noise, we used the Bonn dataset and chose two tasks with well-known neurophysiological characteristics for evaluation (task 1: eyes open vs. eyes closed; task 2: interictal stage vs. ictal stage). EEG signals were split into non-overlapping short segments of 173 sample points each. We used ten-fold cross-validation to train and test the CNN and compared overall performance among groups. In addition, we made several attempts to ease FCL interpretation, including increasing the convolutional kernel (CK) size, using more advanced filters, and carefully initializing the network to cope with noise. Between-group differences were compared using ANOVA or the Kruskal-Wallis test. The interpretability of the CK and FCL blocks was analyzed by inspecting their spectral features.
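As a rough illustration of the described design, the following PyTorch sketch shows a lightweight CNN whose feature extractor ends in an FCL block operating on 173-sample EEG segments. The class name, layer widths, and kernel length are assumptions for illustration only; the abstract does not specify the exact architecture.

```python
# Minimal sketch (assumed PyTorch implementation) of a lightweight CNN with a
# convolutional kernel (CK) block followed by a fully connected layer (FCL)
# used for feature extraction. All hyperparameters here are illustrative.
import torch
import torch.nn as nn

class InterpretableEEGNet(nn.Module):
    def __init__(self, seg_len=173, n_classes=2, kernel_size=31, n_filters=8, fcl_dim=16):
        super().__init__()
        # CK block: temporal filtering of the raw single-channel EEG segment.
        self.ck = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
        )
        # FCL block for feature extraction; its weights span the full time
        # course, so their spectra can later be inspected for task-related rhythms.
        self.fcl = nn.Linear(n_filters * seg_len, fcl_dim)
        self.classifier = nn.Linear(fcl_dim, n_classes)

    def forward(self, x):            # x: (batch, 1, 173)
        h = self.ck(x)               # (batch, n_filters, 173)
        h = h.flatten(start_dim=1)   # (batch, n_filters * 173)
        feats = torch.relu(self.fcl(h))
        return self.classifier(feats)

# Example forward pass on a batch of 173-sample EEG segments.
model = InterpretableEEGNet()
logits = model(torch.randn(4, 1, 173))
print(logits.shape)  # torch.Size([4, 2])
```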

Results: The overall accuracy of the proposed CNN model was 75.3% for task 1 and 85.1% for task 2. Spectral analysis showed that task-related features were enhanced in the FCL. Specifically, the enhanced spectral features of the FCL in task 1 lay in the alpha band, which is largely in agreement with the physiological reactivity of the posterior rhythm when the eyes are closed. Analogously, the FCL intuitively presented changes in theta activity when the CNN was employed to recognize the ictal stage. Results also showed that longer kernels yielded better EEG classification performance. Compared with more advanced kernels, the standard kernel achieved the best performance and made the FCL more interpretable. Finally, we noticed some low-quality segments in the dataset, which might reduce the recognition performance of the CNN; some misclassifications could be explained after inspecting the raw EEG data. Considering that evaluating classification per EEG segment might bias the overall evaluation, we re-evaluated the performance at the sample level, achieving an overall accuracy of 95.5% in task 1 and 92.0% in task 2.
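The spectral inspection of the FCL and the sample-level re-evaluation could be sketched as follows. The reshaping of the FCL weights, the band limits, the Bonn sampling rate of 173.61 Hz, and the majority-vote aggregation are assumptions used only to make the idea concrete; the abstract does not give these details.

```python
# Hedged sketch: estimate how much power the learned FCL weights carry in a
# task-related band (e.g. alpha for eyes open/closed, theta for ictal recognition),
# and aggregate per-segment predictions to the sample level by majority vote.
import numpy as np

def fcl_band_power(fcl_weight, n_filters=8, seg_len=173, fs=173.61, band=(8.0, 13.0)):
    """Mean power of the FCL weight rows inside a frequency band."""
    # fcl_weight: (fcl_dim, n_filters * seg_len) array taken from a trained model.
    w = fcl_weight.reshape(fcl_weight.shape[0], n_filters, seg_len)
    spectra = np.abs(np.fft.rfft(w, axis=-1)) ** 2        # power spectrum per row/filter
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectra[..., in_band].mean()

def sample_level_prediction(segment_preds):
    """One plausible reading of 'sample-level': majority vote over a recording's segments."""
    return np.bincount(np.asarray(segment_preds)).argmax()

# Usage with placeholder weights (a trained model's FCL weights would be used in practice).
rng = np.random.default_rng(0)
alpha_power = fcl_band_power(rng.standard_normal((16, 8 * 173)))
print(alpha_power, sample_level_prediction([1, 1, 0, 1]))
```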

Conclusions: The proposed CNN is unique in extracting EEG spectral features using the FCL and may extend the clinical application of AI-assisted EEG classification and interpretation.

Funding: This work was supported by project grants from the Science and Technology Commission of Shanghai Municipality (Grant No. 201409002000).