Crowdsourcing Seizure Detection Algorithms Using Kaggle and ieeg.org

Abstract number : 3.148
Submission category : 3. Neurophysiology
Year : 2015
Submission ID : 2327993
Source : www.aesnet.org
Presentation date : 12/7/2015
Published date : Nov 13, 2015, 12:43 PM

Authors :
T. Blevins, A. Khambhati, J. Wagenaar, B. Brinkmann, G. Worrell, B. Litt

Rationale: Responsive neurostimulation can modulate seizure activity, improving control of clinical events. However, the seizure detection algorithms in existing commercial responsive neurostimulation devices exhibit modest specificity, resulting in frequent false-positive stimulations. In an attempt to improve algorithm performance, we hosted a crowdsourcing study for seizure detection algorithms, co-sponsored by the NIH and the American Epilepsy Society. Using the data-mining website Kaggle, we posted a training set composed of intracranial human and canine recordings and tested algorithms on a limited validation data set. In the present study, we validate the three best-performing algorithms from the UPenn and Mayo Clinic Seizure Detection Challenge on a more extensive archive of prolonged, open-source, multi-institutional data sets hosted on the International Electrophysiology Portal (http://www.ieeg.org).

Methods: A new data set of human ECoG recordings from UPenn and the Mayo Clinic was selected and divided into a training set and a testing set. The training set comprised randomly sampled 1-second interictal clips and sequentially sampled 1-second ictal clips covering seizures in their entirety. The testing set consisted of sequentially sampled 1-second clips spanning from several minutes before seizure onset through seizure termination. The winning algorithms from Kaggle, as well as a basic reference detection algorithm, were trained on the training set and used to classify clips from the testing set as ictal or interictal. We assessed classification accuracy using the receiver operating characteristic (ROC) curve and the associated area under the curve (AUC), comparing algorithm performance by computing an average ROC across the data set for each algorithm.

Results: All winning algorithms from the Kaggle competition suffered a loss in performance on the new data set. However, the first-place algorithm was the most robust on new data and performed well on the validation data set (Fig. 1). It exceeded the 0.6258 AUC benchmark of the standard seizure detection algorithm, achieving an AUC of 0.8297 averaged across the validation data set. The second- and third-place algorithms performed worse than the standard algorithm, bordering on chance classification (Fig. 2). The second-place algorithm also struggled to scale to larger data sets because of its overwhelming computational load.

Conclusions: Crowdsourcing is a viable mechanism for innovating seizure detection algorithms. Nonetheless, these algorithms may still overfit the data sets for which they were designed. Our study underscores the importance of validating algorithms on previously unseen data. Furthermore, it demonstrates the need for large, accessible stores of electrophysiology data to push seizure detection and prediction performance to the standard required for clinical translation.
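
Supplementary note: The evaluation protocol described in Methods (classify 1-second clips per recording, then average ROC/AUC across the data set) can be illustrated with a short sketch. The code below is not the authors' implementation or any of the Kaggle entries; the classifier choice, feature layout, and function names are illustrative assumptions, and it shows only the clip segmentation and AUC-averaging steps.

    # Minimal sketch of the clip-based evaluation, assuming per-clip feature
    # vectors and binary labels (1 = ictal, 0 = interictal). Classifier and
    # names are hypothetical; only the protocol mirrors the abstract.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    def segment_clips(ecog, fs, clip_sec=1.0):
        # Split a continuous multichannel recording (channels x samples) into
        # non-overlapping 1-second clips, mirroring the test set construction.
        n = int(fs * clip_sec)
        n_clips = ecog.shape[1] // n
        return ecog[:, :n_clips * n].reshape(ecog.shape[0], n_clips, n).swapaxes(0, 1)

    def mean_auc(subjects):
        # subjects: iterable of (X_train, y_train, X_test, y_test) per subject,
        # where rows of X are per-clip feature vectors.
        aucs = []
        for X_train, y_train, X_test, y_test in subjects:
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_train, y_train)
            scores = clf.predict_proba(X_test)[:, 1]    # P(ictal) per test clip
            aucs.append(roc_auc_score(y_test, scores))  # per-subject ROC AUC
        return float(np.mean(aucs))                     # average across the data set

Averaging per-subject AUCs, rather than pooling all clips into one ROC, is one reasonable reading of "an average ROC across the data set"; it keeps recordings with many seizures from dominating the comparison.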