Crowdsourcing Reproducible Seizure Detection
Abstract number :
3.058
Submission category :
1. Translational Research: 1B. Models
Year :
2016
Submission ID :
195724
Source :
www.aesnet.org
Presentation date :
12/5/2016 12:00:00 AM
Published date :
Nov 21, 2016, 6:00 PM
Authors :
Steven N. Baldassano, University of Pennsylvania, Philadelphia, Pennsylvania; Tyler Blevins, University of Pennsylvania; Benjamin H. Brinkmann, Mayo Clinic, Rochester, Minnesota; Ankit N. Khambhati, University of Pennsylvania; Joost Wagenaar, University of Pennsylvania
Rationale: There are significant clinical and basic research needs for accurate, automated seizure detection algorithms. For patients with medically refractory epilepsy, responsive neurostimulation using an implanted device is a promising therapy capable of aborting seizures before clinical symptoms manifest. The efficacy of these devices, however, hinges on accurate, early seizure detection that current commercial detection algorithms often fail to achieve. A major obstacle to developing robust, patient-specific seizure detection algorithms is that only a small number of data scientists have access to high-quality, expertly annotated seizure data from prolonged recordings. We aim to address this limitation by publicly providing EEG data to crowdsource seizure detection algorithm design.

Methods: A Kaggle.com competition was hosted by our lab in collaboration with researchers at the Mayo Clinic, and sponsored by NINDS and AES, to develop the best personalized seizure detection algorithms. This platform crowdsourced optimal solutions by providing easily accessible, annotated recordings to data scientists. We evaluated submitted algorithms on human and canine data collected at the Hospital of the University of Pennsylvania and the Mayo Clinic, and on continuous recordings from humans implanted with the NeuroVista seizure prediction device. Performance was measured using ROC curves for classification of 1-second epochs as non-seizure, seizure, or "early seizure," defined as the first 15 seconds of a seizure (an illustrative scoring sketch follows the abstract). The top three performing solutions were then validated on additional patient data on http://ieeg.org that was not included in the original competition, to assess algorithm robustness.

Results: Two hundred teams of data scientists from around the world participated in the Kaggle.com competition. The top-performing teams submitted highly accurate algorithms, with the top three teams achieving AUC values of 0.963, 0.957, and 0.956 in the competition. These algorithms achieved similar performance in the validation study, with AUC values of 0.962, 0.948, and 0.958, respectively. The efficacy of these seizure detection algorithms far exceeds the current industry standard and represents a new benchmark for personalized seizure detection. The code for the top three algorithms, along with an evaluation pipeline, has been made publicly available so that other researchers can readily apply these algorithms to their own datasets and develop and test new algorithms against these benchmarks.

Conclusions: The success of this competition demonstrates how sharing high-quality data on a central, collaborative platform can create powerful translational tools for clinical patient care. Open access to data and methods is essential for reproducible detection research. We provide the top-performing algorithms from the competition, with performance validated across institutions and species, for immediate clinical and research use.

Funding: Funded through the Ashton Fellowship, the Mirowski Family Foundation (U01-NS-073557-01A1), and the NIH (5-U24-NS-063930-05, UH2NS095495-01).
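As a rough illustration of the kind of epoch-level ROC scoring described in the Methods, the following minimal Python sketch computes AUC values for seizure and early-seizure detection over 1-second epochs. This is a hedged sketch under stated assumptions, not the actual Kaggle or ieeg.org evaluation pipeline; the labels, probabilities, and variable names are hypothetical.

```python
# Illustrative sketch (not the official competition pipeline): scoring
# epoch-level seizure predictions with ROC AUC. Assumes each 1-second EEG
# epoch has a true label in {"interictal", "seizure", "early_seizure"} and
# a model emits a probability for each positive class.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical example data: true labels and model-predicted probabilities.
labels = np.array(["interictal", "seizure", "early_seizure", "interictal", "seizure"])
p_seizure = np.array([0.10, 0.92, 0.85, 0.20, 0.75])  # P(epoch is ictal)
p_early = np.array([0.05, 0.30, 0.80, 0.10, 0.15])    # P(epoch is within first 15 s of a seizure)

# Seizure detection: any ictal epoch (seizure or early seizure) counts as positive.
y_seizure = np.isin(labels, ["seizure", "early_seizure"]).astype(int)
auc_seizure = roc_auc_score(y_seizure, p_seizure)

# Early-seizure detection: only the first 15 seconds of a seizure counts as positive.
y_early = (labels == "early_seizure").astype(int)
auc_early = roc_auc_score(y_early, p_early)

print(f"Seizure AUC: {auc_seizure:.3f}, Early-seizure AUC: {auc_early:.3f}")
```

In practice, such per-epoch AUC values would be computed over prolonged recordings and compared against the published competition benchmarks (0.963, 0.957, 0.956).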