34 research outputs found

    On Statistical Analysis of Brain Variability.

    We discuss what we believe could be an improvement in future discussions of the ever-changing brain. We do so by distinguishing different types of brain variability and outlining methods suitable to analyse them. We argue that, when studying brain and behaviour data, classical methods such as regression analysis and more advanced approaches both aim to decompose the total variance into sensible variance components. In parallel, we argue that a distinction needs to be made between innate and acquired brain variability. For varying high-dimensional brain data, we present methods useful for extracting their low-dimensional representations. Finally, to trace potential causes and predict plausible consequences of brain variability, we discuss how to combine statistical principles and neurobiological insights to make associative, explanatory, predictive, and causal enquiries; caution is needed, however, before raising association- or prediction-based neurobiological findings to causal claims.
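
    As a rough illustration of the variance-decomposition view sketched in this abstract, the snippet below regresses a simulated brain measure on a simulated behavioural score and splits the total variance into an explained and a residual component. The data and variable names are invented for illustration and do not come from the paper.

        # Minimal sketch of variance decomposition via ordinary least squares.
        # Simulated data; variable names (brain_measure, behaviour) are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        behaviour = rng.normal(size=n)                      # behavioural covariate
        brain_measure = 0.6 * behaviour + rng.normal(scale=0.8, size=n)

        # Fit y = a + b * x by least squares.
        X = np.column_stack([np.ones(n), behaviour])
        coef, *_ = np.linalg.lstsq(X, brain_measure, rcond=None)
        fitted = X @ coef
        residual = brain_measure - fitted

        # Total variance decomposes (up to sampling noise) into explained + residual.
        print(f"total     {brain_measure.var(ddof=1):.3f}")
        print(f"explained {fitted.var(ddof=1):.3f}")
        print(f"residual  {residual.var(ddof=1):.3f}")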

    The Biological Basis of Mathematical Beauty

    Our past studies have led us to divide sensory experiences, including aesthetic ones derived from sensory sources, into two broad categories: biological and artifactual. The aesthetic experience of biological beauty is dictated by inherited brain concepts, which are resistant to change in spite of extensive experience. The experience of artifactual beauty, on the other hand, is determined by post-natally acquired concepts, which are modifiable throughout life by exposure to different experiences (Zeki, 2009; Zeki and Chén, 2016). Hence, in terms of aesthetic rating, biological beauty (in which we include the experience of beautiful faces or human bodies) is characterized by less variability between individuals belonging to different ethnic origins and cultural backgrounds, and within the same individual at different times. Artifactual beauty (in which we include the aesthetic experience of human artifacts, such as buildings and cars) is characterized by greater variability between individuals belonging to different ethnic and cultural groupings, and within the same individual at different times. In this paper, we present results to show that the experience of mathematical beauty (Zeki et al., 2014), even though it constitutes an extreme example of beauty that is dependent upon (mathematical) culture and learning, is consistent with one of the characteristics of the biological category, namely a lesser variability in the aesthetic ratings given to mathematical formulae experienced as beautiful.
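
    The notion of "lesser variability in aesthetic ratings" can be made concrete with a simple dispersion measure. The ratings below are invented for illustration only; they are not the study's data.

        # Hypothetical between-rater variability of aesthetic ratings (1-7 scale)
        # for a "biological" and an "artifactual" stimulus; numbers are invented.
        import numpy as np

        ratings_biological = np.array([6, 6, 7, 5, 6, 6, 7, 6])   # e.g. a face
        ratings_artifactual = np.array([2, 7, 4, 6, 3, 5, 7, 1])  # e.g. a building

        print("SD biological :", ratings_biological.std(ddof=1).round(2))
        print("SD artifactual:", ratings_artifactual.std(ddof=1).round(2))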

    Statistical Quantile Learning for Large, Nonlinear, and Additive Latent Variable Models.

    The studies of large-scale, high-dimensional data in fields such as genomics and neuroscience have injected new insights into science. Yet, despite advances, they confront several challenges, often simultaneously: lack of interpretability, nonlinearity, slow computation, inconsistency and uncertain convergence, and small sample sizes compared to high feature dimensions. Here, we propose a relatively simple, scalable, and consistent nonlinear dimension reduction method that can potentially address these issues in unsupervised settings. We call this method Statistical Quantile Learning (SQL) because, methodologically, it leverages a quantile approximation of the latent variables together with standard nonparametric techniques (sieve or penalized methods). We show that estimating the model simplifies into a convex assignment matching problem; we derive its asymptotic properties; and we show that the model is identifiable under a few conditions. Compared to its linear competitors, SQL explains more variance, yields better separation and explanation, and delivers more accurate outcome prediction. Compared to its nonlinear competitors, SQL shows considerable advantages in interpretability, ease of use, and computation in large-dimensional settings. Finally, we apply SQL to high-dimensional gene expression data (consisting of 20,263 genes from 801 subjects), where the proposed method identified latent factors predictive of five cancer types. The SQL package is available at https://github.com/jbodelet/SQL.
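
    A very rough sketch of the quantile-assignment idea is given below, under strong simplifications: a single latent factor and linear loading functions refit by least squares. The variable names are illustrative and this is not the interface of the SQL package linked above.

        # Illustrative sketch: latent values are taken from a fixed grid of
        # standard-normal quantiles, and subjects are matched to quantiles by
        # solving a linear assignment problem; loadings are refit in between.
        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        n, p = 150, 40
        z_true = rng.normal(size=n)
        loadings = rng.normal(size=p)
        X = np.outer(z_true, loadings) + rng.normal(scale=0.5, size=(n, p))

        # Candidate latent values: standard-normal quantiles.
        quantiles = norm.ppf((np.arange(1, n + 1) - 0.5) / n)

        z_hat = rng.normal(size=n)            # crude initialisation
        for _ in range(10):
            # 1. Given latent values, refit one linear loading per feature.
            beta = X.T @ z_hat / (z_hat @ z_hat)
            # 2. Given loadings, assign each subject to one quantile by
            #    minimising squared reconstruction error (assignment problem).
            recon = np.outer(quantiles, beta)                            # (n, p)
            cost = ((X[:, None, :] - recon[None, :, :]) ** 2).sum(axis=2)
            rows, cols = linear_sum_assignment(cost)
            z_hat = quantiles[cols]

        # Up to sign, the recovered latent values should correlate with the truth.
        print("correlation:", abs(np.corrcoef(z_hat, z_true)[0, 1]).round(3))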

    SeqSleepNet: End-to-End Hierarchical Recurrent Neural Network for Sequence-to-Sequence Automatic Sleep Staging

    Automatic sleep staging has often been treated as a simple classification problem that aims at determining the label of individual target polysomnography (PSG) epochs one at a time. In this work, we tackle the task as a sequence-to-sequence classification problem that receives a sequence of multiple epochs as input and classifies all of their labels at once. For this purpose, we propose a hierarchical recurrent neural network named SeqSleepNet. At the epoch processing level, the network consists of a filterbank layer tailored to learn frequency-domain filters for preprocessing and an attention-based recurrent layer designed for short-term sequential modelling. At the sequence processing level, a recurrent layer is placed on top of the learned epoch-wise features for long-term modelling of sequential epochs. The classification is then carried out on the output vectors at every time step of the top recurrent layer to produce the sequence of output labels. Although the network is hierarchical, we present a strategy to train it in an end-to-end fashion. We show that the proposed network outperforms state-of-the-art approaches, achieving an overall accuracy, macro F1-score, and Cohen's kappa of 87.1%, 83.3%, and 0.815 on a publicly available dataset with 200 subjects.
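
    The hierarchical layout described above can be sketched in PyTorch as follows. The learnable filterbank is stood in for by a linear layer, and all sizes are placeholders rather than the published configuration.

        # Schematic sketch of a SeqSleepNet-style hierarchy: filterbank layer,
        # attention-based epoch-level biGRU, sequence-level biGRU, per-step labels.
        import torch
        import torch.nn as nn

        class EpochEncoder(nn.Module):
            def __init__(self, n_freq=129, n_filters=32, hidden=64):
                super().__init__()
                self.filterbank = nn.Linear(n_freq, n_filters)  # stand-in for the learned filterbank
                self.rnn = nn.GRU(n_filters, hidden, batch_first=True, bidirectional=True)
                self.attn = nn.Linear(2 * hidden, 1)            # additive attention scores

            def forward(self, x):                 # x: (batch, time_frames, n_freq)
                x = torch.relu(self.filterbank(x))
                h, _ = self.rnn(x)                # short-term modelling within the epoch
                w = torch.softmax(self.attn(h), dim=1)
                return (w * h).sum(dim=1)         # epoch-level feature vector

        class SeqSleepNetSketch(nn.Module):
            def __init__(self, n_classes=5, hidden=64):
                super().__init__()
                self.epoch_encoder = EpochEncoder(hidden=hidden)
                self.seq_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
                self.classifier = nn.Linear(2 * hidden, n_classes)

            def forward(self, x):                 # x: (batch, n_epochs, time_frames, n_freq)
                b, l, t, f = x.shape
                feats = self.epoch_encoder(x.reshape(b * l, t, f)).reshape(b, l, -1)
                h, _ = self.seq_rnn(feats)        # long-term modelling across epochs
                return self.classifier(h)         # one label per epoch in the sequence

        logits = SeqSleepNetSketch()(torch.randn(2, 20, 29, 129))
        print(logits.shape)                       # torch.Size([2, 20, 5])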

    DNN Filter Bank Improves 1-Max Pooling CNN for Single-Channel EEG Automatic Sleep Stage Classification

    We present in this paper an efficient convolutional neural network (CNN) running on time-frequency image features for automatic sleep stage classification. In contrast to the deep architectures that have been used for the task, the proposed CNN is much simpler. However, its convolutional layer supports kernels of different sizes and is therefore capable of learning features at multiple temporal resolutions. In addition, the 1-max pooling strategy is employed at the pooling layer to better capture the shift-invariance property of EEG signals. We further propose a method to discriminatively learn a frequency-domain filter bank with a deep neural network (DNN) to preprocess the time-frequency image features. Our experiments show that the proposed 1-max pooling CNN performs comparably with the very deep CNNs in the literature on the Sleep-EDF dataset. Preprocessing the time-frequency image features with the learned filter bank before presenting them to the CNN leads to significant improvements in classification accuracy, setting the state-of-the-art performance on the dataset.
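
    A minimal PyTorch rendering of a multi-width convolution with 1-max pooling over time-frequency inputs is shown below; the filter widths, filter counts, and input dimensions are illustrative assumptions, not the paper's exact settings.

        # Parallel convolutional branches with different temporal widths, each
        # followed by 1-max pooling over time, concatenated for classification.
        import torch
        import torch.nn as nn

        class OneMaxPoolingCNN(nn.Module):
            def __init__(self, n_freq=129, widths=(3, 5, 7), n_filters=100, n_classes=5):
                super().__init__()
                # Each branch convolves over the full frequency axis with a
                # different temporal width.
                self.branches = nn.ModuleList(
                    nn.Conv2d(1, n_filters, kernel_size=(w, n_freq)) for w in widths
                )
                self.classifier = nn.Linear(n_filters * len(widths), n_classes)

            def forward(self, x):                         # x: (batch, time, n_freq)
                x = x.unsqueeze(1)                        # add channel dim
                pooled = []
                for conv in self.branches:
                    h = torch.relu(conv(x)).squeeze(-1)   # (batch, n_filters, time')
                    pooled.append(h.max(dim=-1).values)   # 1-max pooling over time
                return self.classifier(torch.cat(pooled, dim=1))

        logits = OneMaxPoolingCNN()(torch.randn(4, 29, 129))
        print(logits.shape)                               # torch.Size([4, 5])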

    Personalized Longitudinal Assessment of Multiple Sclerosis Using Smartphones

    Personalized longitudinal disease assessment is central to quickly diagnosing, appropriately managing, and optimally adapting the therapeutic strategy of multiple sclerosis (MS). It is also important for identifying idiosyncratic, subject-specific disease profiles. Here, we design a novel longitudinal model to map individual disease trajectories in an automated way using sensor data that may contain missing values. First, we collect digital measurements related to gait and balance, and upper extremity functions, using sensor-based assessments administered on a smartphone. Next, we treat missing data via imputation. We then discover potential markers of MS by employing a generalized estimating equation. Subsequently, parameters learned from multiple training datasets are ensembled to form a simple, unified longitudinal predictive model to forecast MS over time in previously unseen people with MS. To mitigate potential underestimation for individuals with severe disease scores, the final model incorporates additional subject-specific fine-tuning using data from the first day. The results show that the proposed model is promising for achieving personalized longitudinal MS assessment; they also suggest that features related to gait and balance, as well as upper extremity function, remotely collected from sensor-based assessments, may be useful digital markers for predicting MS over time.
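
    The marker-discovery step with a generalized estimating equation can be sketched with statsmodels on simulated repeated measures, as below. The column names (gait_score, balance_score, edss_like) and the simulated data are hypothetical, not the study's variables.

        # Hedged sketch: GEE fit on simulated repeated smartphone assessments,
        # with an exchangeable working correlation to respect repeated measures.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n_subj, n_visits = 30, 12
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n_subj), n_visits),
            "gait_score": rng.normal(size=n_subj * n_visits),
            "balance_score": rng.normal(size=n_subj * n_visits),
        })
        # Outcome loosely driven by both features plus subject-level noise.
        df["edss_like"] = (0.5 * df["gait_score"] + 0.3 * df["balance_score"]
                           + np.repeat(rng.normal(scale=0.5, size=n_subj), n_visits)
                           + rng.normal(scale=0.3, size=len(df)))

        model = smf.gee("edss_like ~ gait_score + balance_score", groups="subject",
                        data=df, cov_struct=sm.cov_struct.Exchangeable(),
                        family=sm.families.Gaussian())
        print(model.fit().summary())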

    Automatic Sleep Stage Classification Using Single-Channel EEG: Learning Sequential Features with Attention-Based Recurrent Neural Networks

    We propose in this work a feature learning approach using deep bidirectional recurrent neural networks (RNNs) with an attention mechanism for single-channel automatic sleep stage classification. We first decompose an EEG epoch into multiple small frames and subsequently transform them into a sequence of frame-wise feature vectors. Given the training sequences, the attention-based RNN is trained in a sequence-to-label fashion for sleep stage classification. Due to the discriminative training, the network is expected to encode the information of an input sequence into a high-level feature vector after the attention layer. We therefore treat the trained network as a feature extractor and extract these feature vectors for classification, which is accomplished by a linear SVM classifier. We also propose a discriminative method to learn a filter bank with a DNN for preprocessing purposes. Filtering the frame-wise feature vectors with the learned filter bank beforehand leads to further improvement in classification performance. The proposed approach demonstrates good performance on the Sleep-EDF dataset.
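
    The two-stage pipeline, an attention-based recurrent encoder used as a feature extractor followed by a linear SVM, can be sketched as follows. The encoder here is untrained and its sizes are placeholders; in the paper's pipeline it would first be trained end-to-end for sleep staging.

        # Attention-based biGRU encoder whose attention-layer output is used as
        # a fixed-length feature vector, then classified with a linear SVM.
        import torch
        import torch.nn as nn
        from sklearn.svm import LinearSVC

        class AttentionRNNEncoder(nn.Module):
            def __init__(self, n_feat=129, hidden=64):
                super().__init__()
                self.rnn = nn.GRU(n_feat, hidden, batch_first=True, bidirectional=True)
                self.attn = nn.Linear(2 * hidden, 1)

            def forward(self, x):                       # x: (batch, frames, n_feat)
                h, _ = self.rnn(x)
                w = torch.softmax(self.attn(h), dim=1)  # attention weights over frames
                return (w * h).sum(dim=1)               # epoch-level feature vector

        encoder = AttentionRNNEncoder()
        epochs = torch.randn(100, 29, 129)              # 100 EEG epochs (simulated)
        labels = torch.randint(0, 5, (100,))            # 5 sleep stages (simulated)

        with torch.no_grad():
            features = encoder(epochs).numpy()          # feature extraction step

        svm = LinearSVC().fit(features, labels.numpy()) # linear SVM classification
        print(svm.score(features, labels.numpy()))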

    L-SeqSleepNet: Whole-cycle Long Sequence Modelling for Automatic Sleep Staging

    Human sleep is cyclical with a period of approximately 90 minutes, implying long temporal dependency in the sleep data. Yet, exploring this long-term dependency when developing sleep staging models has remained untouched. In this work, we show that while encoding the logic of a whole sleep cycle is crucial to improving sleep staging performance, the sequential modelling approach in existing state-of-the-art deep learning models is inefficient for that purpose. We thus introduce a method for efficient long sequence modelling and propose a new deep learning model, L-SeqSleepNet, which takes into account whole-cycle sleep information for sleep staging. Evaluating L-SeqSleepNet on four distinct databases of various sizes, we demonstrate state-of-the-art performance obtained by the model over three different EEG setups, including scalp EEG in conventional polysomnography (PSG), in-ear EEG, and around-the-ear EEG (cEEGrid), even with a single EEG channel input. Our analyses also show that L-SeqSleepNet is able to alleviate the predominance of N2 sleep (the majority class) and thereby reduce errors in other sleep stages. Moreover, the network becomes much more robust: for all subjects on which the baseline method performed exceptionally poorly, performance improves significantly. Finally, the computation time grows only at a sub-linear rate as the sequence length increases.
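
    One way to keep recurrent computation tractable over such long epoch sequences, in the spirit of the efficient long sequence modelling described above (though not necessarily the exact L-SeqSleepNet mechanism), is to fold the sequence into blocks and apply recurrence within and across blocks, as sketched below with placeholder sizes.

        # Fold a long sequence of epoch features into blocks; one biGRU models
        # short-range structure within blocks, another models long-range
        # structure across blocks, so sequential steps grow sub-linearly in L.
        import torch
        import torch.nn as nn

        class FoldedSequenceModel(nn.Module):
            def __init__(self, d=128, hidden=64, n_classes=5):
                super().__init__()
                self.within = nn.GRU(d, hidden, batch_first=True, bidirectional=True)
                self.across = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
                self.classifier = nn.Linear(2 * hidden, n_classes)

            def forward(self, x, block=20):          # x: (batch, L, d) epoch features
                b, L, d = x.shape
                nb = L // block                       # assumes L divisible by block
                x = x.reshape(b * nb, block, d)
                h, _ = self.within(x)                 # recurrence inside each block
                h = h.reshape(b, nb, block, -1).transpose(1, 2).reshape(b * block, nb, -1)
                h, _ = self.across(h)                 # recurrence across blocks
                h = h.reshape(b, block, nb, -1).transpose(1, 2).reshape(b, L, -1)
                return self.classifier(h)             # one label per epoch

        logits = FoldedSequenceModel()(torch.randn(2, 200, 128))   # 200 epochs ≈ 100 min
        print(logits.shape)                           # torch.Size([2, 200, 5])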