Machine Learning in Video-Based Monitoring of Epilepsy Patients: Feasibility in seizure detection, classification and documentation

Petri Ojanen

Research output: Book/Report › Doctoral thesis › Collection of Articles

Abstract

Background: Seizure diaries have traditionally been used in the follow-up and management of epilepsy. According to previous research, seizure diaries are prone to inaccuracies due to unobserved seizures and inter-observer discrepancy. Because unobserved seizures, especially during the nighttime, increase the risk of mortality and morbidity, improved seizure documentation is indicated. Video-based monitoring devices hold potential to increase the accuracy of seizure documentation and classification, which may improve patient care and treatment efficacy. Automatic seizure detection, especially with video-based detection devices, has been studied increasingly during the last decade.

Aims of the study: The main objective of this research was to examine automatic detection and classification of seizures by applying machine learning to motion signals extracted from video data. Further objectives were to assess the feasibility of video monitoring in a drug intervention by comparing video-based seizure documentation with patient-provided seizure diaries, and to examine changes in algorithmically evaluated movement intensity over time.

Materials and methods: This research consisted of four studies. Overall, 46 patients with drug-resistant epilepsy participated in one or multiple studies. All patients underwent a four- or eight-week nighttime home video monitoring. In all studies, feature extraction was applied to the video data recorded from patients to obtain three biosignals - sudden motion, oscillation, and changes in sound volume - to characterize seizure activity. Studies III and IV examined a brivaracetam intervention. In the first study, the method was applied to detect the seizures of one patient during a 4-week home monitoring. In the second study, the accuracy of automatic seizure classification was evaluated by using temporal features from the catch22 feature collection to create time series for clustering and visualization (hyperkinetic, tonic, and tonic-clonic seizures were included). The training dataset included 130 seizures from 10 patients, and the testing dataset included 98 seizures from 17 patients. In the third study, seizure documentation from seizure diaries and from automatic video-based seizure monitoring was compared, and the effect of the documentation method on the interpretation of treatment outcomes was evaluated. The study sample included 13 patients. The study also evaluated qualitative changes in movement intensity before and after the intervention by using feature analysis. In the fourth study, the characteristics of signal profiles were explored to evaluate their generalizability and variability within single patients and between patients. The video data of the fourth study consisted of 13 hyperkinetic seizures, 65 tonic seizures, 13 tonic-clonic seizures, and 138 motor seizures from 11 patients.
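As a rough illustration of the feature-based approach described above, the sketch below computes a few simple temporal features from synthetic motion signals. The features and signals here are illustrative stand-ins chosen for this example, not the actual catch22 feature set or the video-derived motion signals used in the studies.

```python
import numpy as np


def temporal_features(x):
    """Compute simple temporal features (illustrative stand-ins for catch22)."""
    x = np.asarray(x, dtype=float)
    diff = np.diff(x)
    # Mean absolute step-to-step change: a crude movement-intensity measure
    mean_abs_change = np.mean(np.abs(diff))
    xc = x - x.mean()
    # Lag-1 autocorrelation: high for smooth rhythmic signals, near zero for noise
    ac1 = float(np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc))
    # Zero-crossing rate of the mean-centered signal
    zero_cross = float(np.mean(np.diff(np.sign(xc)) != 0))
    return np.array([mean_abs_change, ac1, zero_cross])


rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
# Hypothetical "tonic-like" signal: sustained level with little oscillation
tonic = 1.0 + 0.05 * rng.standard_normal(1000)
# Hypothetical "clonic-like" signal: rhythmic ~3 Hz oscillation
clonic = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal(1000)

f_tonic = temporal_features(tonic)
f_clonic = temporal_features(clonic)
# The rhythmic signal shows larger step-to-step change and stronger
# lag-1 autocorrelation, so the two separate in feature space
print("tonic-like: ", f_tonic)
print("clonic-like:", f_clonic)
```

Feature vectors like these could then be fed to a standard clustering or classification routine; the point of the sketch is only that different motor seizure types can become separable once the raw signal is summarized by temporal features.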

Results: In the first study, with optimal parameters and thresholds set to 90% sensitivity, the model reached a false discovery rate of 0.38/h for seizures with a clonic component and 1.02/h for seizures with a tonic component. Motion, oscillation, and sound signals formed distinguishable signal profiles characteristic of different seizure types. In the second study, temporal motion features achieved the best results in the clustering analysis, and after 100 cross-validation runs the system differentiated hyperkinetic, tonic, and tonic-clonic seizures with accuracies of 91%, 88%, and 45%, respectively. The corresponding F1-scores were 93%, 90%, and 37%, and the overall accuracy and F1-score were both 74%. In the third study, according to the video monitoring results during the follow-up phase of the intervention, three patients reached a >50% decrease in seizure frequency, four patients did not respond to the intervention, and seizure frequency increased in two patients. Five out of nine patients documented 40-70% of their seizures in their seizure diaries compared with the video monitoring system. Signal feature analysis showed significant changes in movement intensity in three patients, and statistically significant differences in features were found in 8 out of 9 patients. In the fourth study, the tonic component formed a distinguishable seizure signature in the motion signal, but hyperkinetic and motor seizures had overlapping signal profile characteristics, which might hamper their differentiation. Visually recognizable changes were observed in the signal profiles of two patients after the initiation of brivaracetam. Motion signals might be useful for assessing changes in movement intensity and for evaluating the treatment effect.

Conclusions: Automatic video-based seizure monitoring was able to detect motor seizures and differentiate tonic, clonic, and tonic-clonic seizures by using sudden motion, oscillation, and sound features extracted from the video data. Signal profiles of different motor seizure types might be useful in seizure classification and in the further development of this system. Video monitoring increased the sensitivity of seizure detection compared with seizure diaries, which improved the evaluation of treatment outcomes. The video-based system also enabled feature analysis and visualization of signal profiles when evaluating movement intensity after the initiation of brivaracetam.
Original language: English
Place of Publication: Tampere
ISBN (Electronic): 978-952-03-3444-4
Publication status: Published - 2024
Publication type: G5 Doctoral dissertation (articles)

Publication series

Name: Tampere University Dissertations - Tampereen yliopiston väitöskirjat
Volume: 1024
ISSN (Print): 2489-9860
ISSN (Electronic): 2490-0028
