Local brain connectivity and associations with gender and age
ABSTRACT: Regional homogeneity measures the synchrony of resting-state brain activity in neighboring voxels, i.e. local connectivity. The effects of age and gender on local connectivity in healthy subjects are unknown. We performed regional homogeneity analyses on resting-state BOLD time series data acquired from 58 normal, healthy participants ranging in age from 11 to 35 years (mean 18.1±5.0 years, 32 males). Regional homogeneity was highest for gray matter, with brain regions within the default mode network showing the highest local connectivity values. Regional homogeneity generally decreased with age, with the greatest reduction seen in the anterior cingulate and temporal lobe. Greater female local connectivity in the right hippocampus and amygdala was also noted, regardless of age. These findings suggest that local connectivity at the millimeter scale decreases during development as longer connections are formed, and they underscore the importance of examining gender differences in imaging studies of healthy and clinical populations.
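Regional homogeneity is conventionally computed as Kendall's coefficient of concordance (KCC) over a voxel and its neighbours. A minimal sketch of that statistic, assuming untied time series and a hypothetical 27-voxel neighbourhood (the study's exact neighbourhood size is not stated in the abstract):

```python
import numpy as np

def kendalls_w(ts):
    """Kendall's coefficient of concordance over a voxel neighbourhood.

    ts: array of shape (k_voxels, n_timepoints); assumes no tied values,
    so double argsort yields valid ranks 1..n for each voxel's series.
    """
    k, n = ts.shape
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1  # rank each series over time
    rank_sums = ranks.sum(axis=0)                   # sum of ranks per time point
    s = ((rank_sums - rank_sums.mean()) ** 2).sum() # deviation of rank sums
    return 12.0 * s / (k ** 2 * (n ** 3 - n))       # W in [0, 1]
```

Perfectly synchronous neighbouring voxels give W = 1 (maximal local connectivity); independent series give W near 0.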
Sleep Stage Classification: A Deep Learning Approach
Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. Visual sleep stage classification is time-consuming, subjective, and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed.
In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers.
For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Unlike other works that used artificial noise, we used real EEG data contaminated with EOG and EMG to evaluate the proposed method. The proposed adaptive algorithm led to a significant enhancement in overall classification accuracy. On the classification side, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep learning-based methods for classification of sleep stages, based on a Stacked Sparse AutoEncoder (SSAE) and a Convolutional Neural Network (CNN). The proposed methods performed more efficiently by eliminating the need for the conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with fewer samples than similar studies, they achieved state-of-the-art accuracy and higher overall sensitivity.
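The abstract does not specify the adaptive algorithm used. A common approach to removing EOG/EMG contamination when a reference channel is available is an LMS adaptive noise canceller, sketched here with a hypothetical step size and filter length:

```python
import numpy as np

def lms_cancel(primary, reference, mu=0.01, taps=8):
    """Adaptive noise cancellation sketch (LMS); mu and taps are illustrative.

    primary:   contaminated EEG channel
    reference: artefact reference (e.g. an EOG channel)
    Returns the filter error, which serves as the cleaned signal.
    """
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # most recent reference samples
        e = primary[n] - w @ x                   # error = cleaned estimate
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out
```

The filter learns whatever linear mapping couples the reference artefact into the EEG channel, so the subtraction adapts per recording rather than using a fixed template.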
Brain functional networks in syndromic and non-syndromic autism: a graph theoretical study of EEG connectivity
Background
Graph theory has recently been introduced to characterize complex brain networks, making it highly suitable for investigating altered connectivity in neurologic disorders. A current model proposes autism spectrum disorder (ASD) as a developmental disconnection syndrome, supported by converging evidence in both non-syndromic and syndromic ASD. However, the effects of abnormal connectivity on network properties have not been well studied, particularly in syndromic ASD. To close this gap, brain functional networks of electroencephalographic (EEG) connectivity were studied through graph measures in patients with Tuberous Sclerosis Complex (TSC), a disorder with a high prevalence of ASD, as well as in patients with non-syndromic ASD.
Methods
EEG data were collected from TSC patients with ASD (n = 14) and without ASD (n = 29), from patients with non-syndromic ASD (n = 16), and from controls (n = 46). First, EEG connectivity was characterized by the mean coherence, the ratio of inter- over intra-hemispheric coherence and the ratio of long- over short-range coherence. Next, graph measures of the functional networks were computed and a resilience analysis was conducted. To distinguish effects related to ASD from those related to TSC, a two-way analysis of covariance (ANCOVA) was applied, using age as a covariate.
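The graph measures named above can be computed directly from a thresholded (binarised) coherence matrix. A small illustrative sketch of clustering coefficient and global efficiency on a binary adjacency matrix, not the study's actual pipeline:

```python
import numpy as np
from collections import deque

def bfs_dists(adj, src):
    """Shortest-path hop counts from src in an unweighted graph."""
    n = len(adj)
    dist = [np.inf] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in range(n):
            if adj[u][v] and dist[v] == np.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest path length over all node pairs."""
    n = len(adj)
    eff = 0.0
    for i in range(n):
        d = bfs_dists(adj, i)
        eff += sum(1.0 / d[j] for j in range(n) if j != i and d[j] != np.inf)
    return eff / (n * (n - 1))

def avg_clustering(adj):
    """Mean fraction of each node's neighbour pairs that are connected."""
    a = np.asarray(adj)
    cc = []
    for i in range(len(a)):
        nbrs = np.flatnonzero(a[i])
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = a[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        cc.append(2 * links / (k * (k - 1)))
    return float(np.mean(cc))
```

Decreased global efficiency and clustering with increased path length, as reported for TSC below, would correspond to a sparser, less integrated adjacency structure in this representation.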
Results
Analysis of network properties revealed differences specific to TSC and ASD, and these differences were very consistent across subgroups. In TSC, both with and without a concurrent diagnosis of ASD, mean coherence, global efficiency, and clustering coefficient were decreased and the average path length was increased. These findings indicate an altered network topology. In ASD, both with and without a concurrent diagnosis of TSC, decreased long- over short-range coherence and markedly increased network resilience were found.
Conclusions
The altered network topology in TSC represents a functional correlate of structural abnormalities and may play a role in the pathogenesis of neurological deficits. The increased resilience in ASD may reflect an excessively degenerate network with local overconnection and decreased functional specialization. This joint study of TSC and ASD networks provides a unique window into common neurobiological mechanisms in autism.
Impact of alpha-synuclein spreading on the nigrostriatal dopaminergic pathway depends on the onset of the pathology
Misfolded alpha-synuclein spreads through the brain along anatomically connected areas, prompting progressive neurodegeneration of the nigrostriatal pathway in Parkinson's disease. To investigate the impact of early-stage seeding and spreading of misfolded alpha-synuclein along the nigrostriatal pathway, we studied the pathophysiologic effect induced by a single acute inoculation of alpha-synuclein preformed fibrils (PFFs) into the midbrain. Further, to model the progressive vulnerability that characterizes the dopamine (DA) neuron life span, we used two cohorts of mice of different ages: 2-month-old (young) and 5-month-old (adult) mice. Two months after alpha-synuclein PFF injection, we found that striatal DA release decreased exclusively in adult mice. Adult DA neurons showed increased spreading of pathology along the nigrostriatal pathway, accompanied by a lower volume of alpha-synuclein deposition in the midbrain, impaired neurotransmission, rigid DA terminal composition, and less microglial reactivity compared with young neurons. Notably, preserved DA release and increased microglial coverage in the PFF-seeded hemisphere coexist with decreased large-sized terminal density in young DA neurons. This suggests the presence of a targeted pruning mechanism that limits the detrimental effect of early alpha-synuclein spreading. This study suggests that the impact of the pathophysiology caused by misfolded alpha-synuclein spreading along the nigrostriatal pathway depends on the age of the DA network, reducing striatal DA release specifically in adult mice.
Model-Architecture Co-design of Deep Neural Networks for Embedded Systems
In deep learning, a convolutional neural network (ConvNet or CNN) is a powerful tool for building interesting embedded applications that use data to make predictions. An application running on an embedded system typically has limited access to memory, processing power, and storage. Implementing deep convolutional neural network-based inference on resource-constrained devices can be very challenging, as these environments cannot usually draw on the massive computing power and storage present in cloud server environments. Furthermore, the constantly evolving nature of modern deep network architectures aggravates the problem by making it necessary to balance flexibility against specialisation. However, much of the baseline architecture of a deep convolutional neural network has stayed the same. With careful optimisation of the most common and widely occurring layer architectures, it is typically possible to accelerate these emerging workloads on resource-constrained embedded systems.
This thesis makes four contributions. I first developed a lossy three-stage low-rank approximation scheme that can reduce the computational complexity of a pre-trained model by 3-5x, and by up to 8-9x for individual convolutional layers. This scheme requires restructuring the convolutional layers and generally suits scenarios where both the training data and the trained model are available.
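A generic illustration of the low-rank idea (not the thesis's specific three-stage scheme): a convolutional layer's weights, flattened to a matrix, can be truncated to rank r via SVD, replacing one large matrix product with two smaller ones:

```python
import numpy as np

def low_rank_factor(w, rank):
    """Rank-r factorisation of flattened conv weights via truncated SVD.

    w: weights reshaped to (out_ch, in_ch * kh * kw).
    Returns factors a (out_ch, rank) and b (rank, in_ch * kh * kw),
    so the layer's multiply count drops from out*ikk to rank*(out + ikk).
    """
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # absorb singular values into the left factor
    b = vt[:rank]
    return a, b
```

In practice the truncation costs some accuracy, which is why the scheme above is described as lossy and benefits from fine-tuning on the training data.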
In many scenarios, the training data is not available for fine-tuning away any loss in prediction accuracy incurred when structural changes are made to a model as a post-processing step. Beyond the lack of training data, there are other situations where the architecture of a model cannot be changed after training. My second contribution handles this scenario with a low-level optimisation scheme that, unlike the low-rank approximation scheme, requires no changes to the model architecture. This novel scheme uses a modified version of the Cook-Toom algorithm to reduce the computational intensity of commonly occurring dense and spatial convolutional layers and speeds up inference by 2-4x.
My third contribution is an efficient implementation of the Cook-Toom class of algorithms on Arm's ubiquitous low-power Cortex processors. Unlike direct convolution, computing convolutions with the modified Cook-Toom algorithm requires a different data processing pipeline, as it involves pre- and post-transformations of the intermediate activations. I introduced a multi-channel multi-region (MCMR) scheme to enable an efficient implementation of the fast Cook-Toom algorithm. I demonstrate that, by effectively using SIMD instructions and the MCMR scheme, an average 2-3x and a peak 4x per-layer speedup is readily achievable.
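The Cook-Toom (Winograd) minimal filtering idea can be illustrated with F(2,3), which produces 2 outputs of a 3-tap convolution using 4 elementwise multiplications instead of 6. The transform matrices below are the standard published ones, not necessarily the thesis's modified variant:

```python
import numpy as np

# Standard F(2,3) transforms: input (BT), filter (G), and output (AT)
BT = np.array([[1, 0, -1,  0],
               [0, 1,  1,  0],
               [0, -1, 1,  0],
               [0, 1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of the valid 3-tap correlation of d (len 4) with g (len 3).

    The pointwise product in the transform domain is where the 4 multiplies
    happen; in a ConvNet the filter transform G @ g is precomputed once.
    """
    return AT @ ((G @ g) * (BT @ d))
```

The pre-transform (BT) and post-transform (AT) stages are exactly the extra pipeline steps the MCMR scheme above is designed to keep SIMD-friendly.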
My final contribution is the Cook-Toom accelerator, a custom hardware architecture for modern convolutional neural networks. This accelerator architecture is designed from the ground up to address some of the limitations of a resource-constrained SIMD processor. I also illustrate how new emerging layer types can be mapped efficiently to the same flexible architecture without any modification.