Fellow, Baylor College of Medicine, Castle Rock, Colorado
Rationale: Differentiating epileptic seizures (ES) from non-epileptic seizures (NES) in the epilepsy monitoring unit (EMU) is a common problem facing epileptologists, and such patients make up 20% of referrals to tertiary epilepsy centers. While this distinction relies heavily on clinical evaluation, we hypothesized that machine learning modeling of human visual analysis using convolutional neural networks (CNNs) could extract features from EEG waveforms to correctly classify seizures as ES or NES.

Methods: EEG screenshots from EMU patients were taken from the standard EEG reading software used by clinicians and divided into seven-second epochs. A total of 4,406 screenshots were used from 60 subjects with 121 combined events. The training and validation datasets included 28 subjects with 57 seizures (ES: 33, NES: 24) and 16 unique subjects with 30 seizures (ES: 14, NES: 16), respectively. The testing dataset included an additional 16 unique subjects with 34 seizures (ES: 15, NES: 19). The training dataset included screenshots using four montages commonly used clinically at our center (Figure 1). The validation and testing screenshots were obtained using only the AP-Bipolar montage. Two CNN models were trained: one using only AP-Bipolar screenshots from the training dataset (1-Montage model, 820 images) and the other using screenshots from all four montages (4-Montage model, 3,309 images). The CNN architecture included 19 convolutional layers intermixed with 7 max-pooling layers, 1 dense fully connected layer, and one 2-node output layer. The model was created and evaluated using the TensorFlow library in Python.

Results: (1) Image classification: The 1-Montage model showed an overall accuracy of 84.2% in classifying EEG waveform screenshots as ES vs NES. The 4-Montage model showed a significant improvement in seizure classification, with an overall accuracy of 91.0%, performing significantly better than the 1-Montage model (χ2 = 29.4984, p < 0.00001). (2) Event classification: An event was counted as correctly classified if >= 75% of its images were classified correctly by the model. The 1-Montage model showed 76.5% accuracy in classifying events as ES vs NES; the 4-Montage model showed 94.1% accuracy. Detailed results are shown in Table 1.

Conclusions: We demonstrate a proof of concept that deep learning can be trained to emulate human readers in visually recognizing EEG waveforms as they are commonly read in clinical practice. This work can increase EEG reading efficiency and could lead to automated models that alert monitoring technicians to potential seizures. Our model compares favorably to prior work on wavelet decomposition of EEG waveforms, with a high level of sensitivity, specificity, and accuracy. The use of additional standard montage representations of the same EEG waveform data significantly improves the rate of correctly classifying epileptic versus non-epileptic seizures, suggesting that more robust seizure classification models can be trained without the need for additional unique ES and NES examples.

Funding: No funding was received in support of this abstract.
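The abstract specifies the layer counts (19 convolutional layers intermixed with 7 max-pooling layers, one dense fully connected layer, and a 2-node output) and the event-level scoring rule (>= 75% of images correct), but not filter counts, kernel sizes, input resolution, or the exact conv/pool arrangement. The sketch below illustrates one plausible reading of that description in TensorFlow/Keras; the block grouping, filter sizes, input shape, optimizer, and label encoding are all assumptions for illustration, not the authors' implementation.

# Illustrative sketch only (not the authors' code). Layer counts follow the
# abstract; every unspecified architectural choice below is an assumption.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_seizure_classifier(input_shape=(256, 512, 3)):
    # Assumed arrangement: 7 blocks with 3+3+3+3+3+2+2 = 19 convolutional
    # layers, each block followed by one max-pooling layer (7 pools total).
    convs_per_block = [3, 3, 3, 3, 3, 2, 2]
    model = models.Sequential([layers.Input(shape=input_shape)])
    filters = 32
    for n_convs in convs_per_block:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=(2, 2)))
        filters = min(filters * 2, 256)  # assumed filter growth per block
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))   # the single dense fully connected layer
    model.add(layers.Dense(2, activation="softmax"))  # 2-node output: ES vs NES
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def event_correctly_classified(image_predictions, true_label, threshold=0.75):
    # Event-level scoring rule from the abstract: an event counts as correctly
    # classified when at least 75% of its per-image predictions match the
    # event's true label (label encoding, e.g. 0 = NES, 1 = ES, is assumed).
    image_predictions = np.asarray(image_predictions)
    return float(np.mean(image_predictions == true_label)) >= threshold

Under this reading, the 1-Montage and 4-Montage models would share the same architecture and differ only in the training images supplied (AP-Bipolar screenshots only versus screenshots rendered in all four clinical montages).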