AI versus Expert Classification of Electrographic Seizures and Seizure Onset Times
Abstract number :
1.18
Submission category :
3. Neurophysiology / 3G. Computational Analysis & Modeling of EEG
Year :
2023
Submission ID :
248
Source :
www.aesnet.org
Presentation date :
12/2/2023
Authors :
Presenting Author: Sharanya Arcot Desai, PhD – NeuroPace, Inc.
Muhammed Afzal, PhD – NeuroPace, Inc.; Wade Barry, BA – NeuroPace, Inc.; Jonathan Kuo, MD – Keck School of Medicine of USC; Shawna Benard, MD – Keck School of Medicine of USC; Christopher Traner, MD, MHS-Med Ed – Yale School of Medicine; Thomas Tcheng, PhD – NeuroPace, Inc.; Cairn Seale, MS – NeuroPace, Inc.; Martha Morrell, MD – NeuroPace, Inc.; Stanford University
Rationale:
Hundreds of 90-second iEEG records are typically captured from each NeuroPace RNS System patient between clinic visits. While these records provide invaluable information about the patient’s electrographic seizure and interictal activity patterns, manually classifying them as electrographic seizure or non-seizure activity and manually identifying the seizure onset channels and times is an extremely time-consuming process. A convolutional neural network-based Electrographic Seizure Classifier (ESC) model was developed by training a ResNet deep learning model on ~180,000 iEEG channels manually labeled as electrographic seizure/non-seizure. The trained model had a classification accuracy of 95.7% on a held-out 20% of the dataset not used for training (Barry et al., 2021).
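The abstract does not specify the ESC architecture beyond "ResNet"; the sketch below only illustrates the core residual-block idea on a single iEEG channel, using plain numpy. The 250 Hz sampling rate and kernel sizes are assumptions for illustration, not details from the study.

```python
import numpy as np

def conv1d(x, w):
    # 'Same'-padded 1-D convolution of a single-channel signal x with kernel w.
    pad = len(w) // 2
    return np.convolve(np.pad(x, pad), w, mode="valid")[: len(x)]

def residual_block(x, w1, w2):
    # ResNet-style block: conv -> ReLU -> conv, then add the skip connection
    # and apply a final ReLU. The skip path is what eases deep-network training.
    h = np.maximum(conv1d(x, w1), 0.0)
    return np.maximum(x + conv1d(h, w2), 0.0)

# One 90-second iEEG channel at an assumed 250 Hz -> 22,500 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(22_500)
w1 = rng.standard_normal(7) * 0.1  # hypothetical learned kernels
w2 = rng.standard_normal(7) * 0.1
y = residual_block(x, w1, w2)
assert y.shape == x.shape  # skip connection requires matching shapes
```

A real classifier would stack many such blocks over multi-channel input and end in a pooled sigmoid head producing the seizure/non-seizure score.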
Methods:
The ESC model was further tested against iEEG annotations provided by three expert reviewers board-certified in epilepsy. The three experts individually annotated 3,874 iEEG channels from 36, 29, and 35 patients with leads in the mesiotemporal (MTL), neocortical (NEO), and MTL+NEO regions, respectively. The ESC model’s seizure/non-seizure classifications agreed with the three reviewers at 88.7%, 89.6%, and 84.3%, similar to the reviewers’ agreement with each other (86.4%–92.9%). On iEEG channels where all three experts agreed (83.2% of channels), the ESC model had an agreement score of 93.2%. Additionally, the ESC model’s classification certainty scores reflected combined reviewer certainty. When 0, 1, 2, and 3 (out of 3) reviewers annotated iEEG channels as electrographic seizures, the ESC model’s seizure certainty scores fell in the ranges [0.12–0.19], [0.32–0.42], [0.61–0.70], and [0.92–0.95], respectively (Figure 1).
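The agreement and certainty-binning analyses above can be sketched as follows. The labels and scores here are simulated stand-ins (the study's annotations are not public); only the computations mirror the abstract: pairwise reviewer agreement, and mean model certainty binned by how many of the three reviewers marked a channel as a seizure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated labels: rows = iEEG channels, columns = three expert reviewers
# (1 = electrographic seizure, 0 = non-seizure).
labels = rng.integers(0, 2, size=(1_000, 3))
# Simulated model seizure-certainty scores in [0, 1].
scores = rng.random(1_000)

def pairwise_agreement(a, b):
    # Fraction of channels on which two annotators give the same label.
    return float(np.mean(a == b))

# Reviewer-vs-reviewer agreement, as in the abstract's 86.4%-92.9% range.
pairs = [(0, 1), (0, 2), (1, 2)]
agreements = [pairwise_agreement(labels[:, i], labels[:, j]) for i, j in pairs]

# Mean model certainty binned by seizure votes (0-3 of 3 reviewers),
# mirroring the certainty-score analysis summarized in Figure 1.
votes = labels.sum(axis=1)
binned = [float(scores[votes == k].mean()) for k in range(4)]
```

The same `pairwise_agreement` helper also gives model-vs-reviewer agreement when one argument is the thresholded model output.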
Results:
The ESC model was used as a starting-point model for training a second Seizure Onset Detection (SOD) model. For this task, seizure onset times were manually annotated on a relatively small number of iEEG channels (4,859 from 50 patients). Experiments showed that fine-tuning the ESC model with augmented data (30,768 iEEG channels) resulted in better validation performance (on 20% of the manually annotated data) than training with only the original data (3.1 versus 4.4 seconds median absolute error). Similarly, using the ESC model weights as the starting point for fine-tuning, instead of other weight initialization methods, provided a statistically significant advantage in SOD validation performance (3.1 versus 4.7 and 3.5 seconds median absolute error). Finally, on iEEG channels for which the three expert annotations of seizure onset time were within 1.5 seconds of each other, the SOD model’s predicted seizure onset time was within 1.7 seconds of the expert annotations (Figure 2).
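Two pieces of the SOD experiments lend themselves to a short sketch: the median-absolute-error metric used throughout, and a temporal-shift augmentation. Note the abstract does not state which augmentation scheme produced the 30,768 channels; the shift scheme below is one plausible example, and the onset values are hypothetical.

```python
import numpy as np

def median_absolute_error(pred, true):
    # Median absolute error (seconds) between predicted and annotated onsets;
    # the validation metric reported in the abstract (e.g., 3.1 s vs. 4.4 s).
    return float(np.median(np.abs(np.asarray(pred) - np.asarray(true))))

def shift_augment(window, onset_idx, shift, fill=0.0):
    # One plausible augmentation (not confirmed by the abstract): shift the
    # iEEG window in time and move the onset label by the same amount.
    out = np.roll(window, shift)
    if shift > 0:
        out[:shift] = fill
    elif shift < 0:
        out[shift:] = fill
    return out, onset_idx + shift

# Hypothetical onset predictions vs. expert annotations, in seconds.
pred = [10.2, 31.0, 44.9, 60.3]
true = [12.0, 30.0, 45.5, 58.0]
mae = median_absolute_error(pred, true)  # median of [1.8, 1.0, 0.6, 2.3]

w = np.arange(10.0)
aug, new_onset = shift_augment(w, onset_idx=4, shift=2)
```

Each augmented copy yields a new (window, onset) training pair, which is how a small annotated set can be expanded several-fold before fine-tuning.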
Conclusions:
In summary, this work aims to clinically validate an Electrographic Seizure Classifier (ESC) model and train a second Seizure Onset Detection (SOD) model using a relatively small, labeled dataset by leveraging features learned by the first model. Clinical tools developed with these models could significantly improve epilepsy clinical workflows, particularly for iEEG data review, which is predominantly performed manually today.
Funding: N/A.