New sleep state classification method combines deep learning with WFCI

1/6/2022

Lauren Laws

Innovation can come from curiosity. Other times, it comes from necessity. This time, new sleep study research from the Computational Imaging Science Laboratory, led by bioengineering department head and HCESC researcher Mark A. Anastasio, came from a mix of both. A paper on that research, in collaboration with Washington University in St. Louis, was recently published in the Journal of Neuroscience Methods.

CISL and WashU have been studying sleep state classification using wide-field calcium imaging (WFCI), a method for monitoring cortex-wide dynamics with high spatial and temporal resolution during sleep. In a first-of-its-kind study, they found that combining WFCI with multiplex visibility graphs (MVGs) and deep learning, via two-dimensional convolutional neural networks (CNNs), yields an automated sleep state classification method with an inter-rater reliability of 0.67, comparable to human EEG/EMG-based scoring, which is more time-consuming and invasive.
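A common statistic for reporting inter-rater reliability like the 0.67 figure above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. Whether the paper uses kappa specifically is an assumption here; the sketch below, with made-up wake/NREM/REM labels, simply shows how such a chance-corrected agreement score is computed.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Fraction of epochs where the two raters assign the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Agreement expected if each rater labeled independently at
    # their own marginal rates
    expected = sum(ca[k] * cb[k] for k in ca) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical 10-epoch scoring: W = wake, N = NREM, R = REM
a = ["W", "W", "N", "N", "N", "R", "R", "W", "N", "R"]
b = ["W", "N", "N", "N", "N", "R", "W", "W", "N", "R"]
print(round(cohens_kappa(a, b), 3))
```

A kappa near 1 means near-perfect agreement; values in the 0.6 to 0.8 range are conventionally read as substantial agreement, which is why 0.67 is considered comparable to expert scoring.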

Current methods of classifying WFCI data rely on electroencephalogram (EEG) and electromyogram (EMG) signals. However, obtaining EEG/EMG signals requires placing electrodes near the surface of a mouse's brain, which is invasive and increases the risk of infection. Trained professionals are also needed to inspect those signals, which researchers say is time-consuming and involves a large number of measurements to go through. 

"It takes days for them to just sit there labeling all sleeping states, so that's where we were first brought to this idea of maybe we could try some machine learning or deep learning," said Xiaohui Zhang, a third-year Ph.D. student under Anastasio and a CISL research assistant. 

Representative Grad-CAM examples of wakefulness, NREM and REM from three mice.

To do this, researchers split spatial-temporal WFCI data into ten-second epochs. Using atlas information, each epoch was converted into a multivariate time series and then mapped to a multiplex visibility graph, a representation well suited to describing discrete neuronal events over time. A 2D multi-channel CNN was then used to classify sleep states via supervised deep learning. 
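The core of that mapping is the natural visibility graph, which turns each regional calcium trace into a graph: time points become nodes, and two points are connected if the straight line between them clears every intermediate sample. The sketch below, using a tiny made-up signal rather than real WFCI data, illustrates the construction for a single region; stacking one such graph per cortical region would form the multiplex layers fed to the CNN.

```python
import numpy as np

def natural_visibility_graph(series):
    """Build the natural visibility graph of a 1-D time series.

    Nodes are time points; points (i, y[i]) and (j, y[j]) are linked
    when every intermediate sample lies strictly below the straight
    line joining them (the visibility criterion). Returns a symmetric
    adjacency matrix.
    """
    y = np.asarray(series, dtype=float)
    n = len(y)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            t = np.arange(i + 1, j)
            # Height of the line from point i to point j at each
            # intermediate time step
            line = y[i] + (y[j] - y[i]) * (t - i) / (j - i)
            if t.size == 0 or np.all(y[t] < line):
                adj[i, j] = adj[j, i] = 1
    return adj

# Toy five-sample "epoch" standing in for one region's calcium trace
epoch = np.array([0.1, 0.5, 0.3, 0.8, 0.2])
print(natural_visibility_graph(epoch))
```

Because each layer's adjacency matrix is an image-like 2D array, a multi-channel 2D CNN can consume the multiplex graph directly, which is what makes this representation a natural fit for the supervised classifier described above.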

The team found that the MVG-CNN method not only achieved performance comparable to manual scoring based on EEG/EMG signals; it also showed that the CNN focused on short- and long-term temporal connections of the MVG in a sleep state-specific manner, and that the posterior area of the cortex yielded more accurate sleep state classification than other brain regions. They also discovered that using different time intervals could lead to better sleep state classification of WFCI data. 

"The conventional way when people look into the EEG, is that they use 10 seconds," said Zhang. "But we are kind of questioning, in this calcium imaging, maybe we can use less data to achieve the same or similar accuracy." 

Researchers are hopeful this method could have further applications in the study of sleep, stroke and other consciousness states. 

"One bright point of this paper is showing the spatial-temporal information provided by the wide-field calcium imaging; it's kind of interesting or encouraging for people who want to learn more about the brain via direct neuronal activity reading," said Zhang. "It provided better spatial and temporal resolution. You can see the field of view of the whole brain, different from EEG, which is just like 1D signals."