1,746 research outputs found

    Convolutional Dictionary Learning through Tensor Factorization

    Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning. Model parameters are estimated via CP decomposition of the observed higher-order input moments. However, in many domains additional invariances, such as shift invariance, exist and are enforced via models such as convolutional dictionary learning. In this paper, we develop novel tensor decomposition algorithms for parameter estimation of convolutional models. Our algorithm is based on the popular alternating least squares method, but with efficient projections onto the space of stacked circulant matrices. Our method is embarrassingly parallel and consists of simple operations such as fast Fourier transforms and matrix multiplications. Our algorithm converges to the dictionary much faster and more accurately than alternating minimization over filters and activation maps.
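
    The computational core described above is a projection onto (stacked) circulant matrices inside an alternating least squares loop. The paper's full algorithm is not reproduced here; the following is a minimal sketch, assuming a single n x n matrix, of the Frobenius-norm projection onto circulant matrices (averaging the wrapped diagonals) together with an FFT-based application of the resulting circulant operator. Function names are illustrative.

        import numpy as np

        def project_to_circulant(M):
            """Frobenius-norm projection of a square matrix onto circulant matrices.

            A circulant matrix satisfies C[i, j] = c[(i - j) % n], so the nearest
            circulant is obtained by averaging M along each wrapped diagonal.
            """
            n = M.shape[0]
            idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n  # wrapped-diagonal index
            c = np.bincount(idx.ravel(), weights=M.ravel(), minlength=n) / n
            return c  # first column of the projected circulant matrix

        def circulant_matvec(c, x):
            """Apply the circulant matrix with first column c to a vector x via the FFT."""
            return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

        # Example: project a random matrix, then apply the projected operator.
        rng = np.random.default_rng(0)
        M = rng.standard_normal((8, 8))
        c = project_to_circulant(M)
        y = circulant_matvec(c, rng.standard_normal(8))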

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.

    On the structure of natural human movement

    Understanding of human motor control is central to neuroscience, with strong implications for medicine, robotics and evolution. It is thus surprising that the vast majority of motor control studies have focussed on human movement in the laboratory while neglecting behaviour in natural environments. We developed an experimental paradigm to quantify human behaviour at high resolution over extended periods of time in ecologically relevant environments. This allows us to uncover novel insights and evidence that contradicts well-established findings obtained under controlled laboratory conditions. Using our data, we map the statistics of natural human movement and their variability between people. The variability and complexity of the data recorded in these settings required us to develop new tools to extract meaningful information in an objective, data-driven fashion. Moving from descriptive statistics to structure, we identify stable structures of movement coordination, particularly within the arm-hand area. Combining our data with numerous published findings, we argue that current hypotheses that the brain simplifies motor control problems by dimensionality reduction are too reductionist. We propose an alternative hypothesis derived from sparse coding theory, a concept that has been successfully applied to the sensory system. To investigate this idea, we develop an algorithm for unsupervised identification of sparse structures in natural movement data. Our method outperforms state-of-the-art algorithms in accuracy and data efficiency. Applying this method to hand data reveals a dictionary of "sparse eigenmotions" (SEMs) which are well preserved across multiple subjects. These are highly efficient and invariant representations of natural movement, and suggest a potential higher-order grammatical structure or "movement language". Our findings make a number of testable predictions about the neural coding of movement in the cortex. This has direct consequences for advancing research on dextrous prosthetics and robotics, and has profound implications for our understanding of how the brain controls our body.
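
    The unsupervised identification of sparse structure mentioned above is in the spirit of sparse coding / dictionary learning. The sketch below is not the thesis's SEM algorithm; it is a generic sparse-coding baseline, assuming hand-movement recordings arranged as fixed-length windows in a matrix X (windows x features). The library choice, shapes and parameters are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        # Assumed input: windowed joint-angle trajectories, one window per row
        # (e.g. 1000 windows of 20 joints x 3 time steps = 60 features each).
        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 60))

        # Learn an overcomplete dictionary with sparse activations.
        learner = MiniBatchDictionaryLearning(
            n_components=100,               # dictionary size, overcomplete w.r.t. 60 features
            alpha=1.0,                      # sparsity penalty
            transform_algorithm="lasso_lars",
            random_state=0,
        )
        codes = learner.fit_transform(X)    # sparse coefficients, shape (1000, 100)
        primitives = learner.components_    # learned movement primitives, shape (100, 60)

        # Fraction of active (non-zero) coefficients per window.
        sparsity = float(np.mean(codes != 0))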

    Formal Models of the Network Co-occurrence Underlying Mental Operations

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging to characterize. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative route to the neural substrates of human cognition.
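
    The central idea above, explaining task-specific activity as a recombination of canonical resting-state networks, can be illustrated with a simple linear model. The sketch below is not the paper's multivariate learning method; it is a minimal illustration, assuming a task activation map y (one value per voxel) and a matrix N whose columns are resting-state network topographies, with network weights estimated by ordinary least squares. All shapes and data here are synthetic placeholders.

        import numpy as np

        # Assumed (synthetic) inputs: 10,000 voxels, 20 canonical network maps.
        rng = np.random.default_rng(0)
        N = rng.standard_normal((10_000, 20))               # columns: network topographies
        true_w = rng.standard_normal(20)
        y = N @ true_w + 0.1 * rng.standard_normal(10_000)  # synthetic task activation map

        # "Network co-occurrence" weights: contribution of each network to the task map.
        w, *_ = np.linalg.lstsq(N, y, rcond=None)

        # How well a recombination of network maps reconstructs the task map.
        r2 = 1.0 - np.sum((y - N @ w) ** 2) / np.sum((y - y.mean()) ** 2)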

    Cognitive Learning for Sentence Understanding


    Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem

    In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the Rectified Gaussian Scale Mixture (R-GSM) to model the sparsity-enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student-t distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the Expectation-Maximization (EM) algorithm. Using the EM-based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix.
    Comment: Under review by IEEE Transactions on Signal Processing.
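
    The hierarchical construction of the R-GSM prior described above, a non-negative Gaussian whose variance is drawn from a mixing density, can be illustrated by sampling from it. The sketch below is not the paper's R-SBL inference; it is a minimal sampler, assuming an inverse-gamma mixing density (which yields a rectified Student-t marginal). Parameter names and values are illustrative.

        import numpy as np

        def sample_rgsm(n, shape=1.0, scale=1.0, rng=None):
            """Draw n samples from a rectified Gaussian scale mixture.

            Hierarchy: variance ~ InverseGamma(shape, scale), x ~ N(0, variance)
            restricted to x >= 0 (taken as |x|, valid for a zero-mean Gaussian).
            With inverse-gamma mixing the marginal is a rectified Student-t.
            """
            rng = np.random.default_rng() if rng is None else rng
            variances = scale / rng.gamma(shape, 1.0, size=n)  # inverse-gamma draws
            return np.abs(rng.normal(0.0, np.sqrt(variances)))

        # Heavy-tailed, non-negative samples suitable as a sparsity-promoting prior.
        x = sample_rgsm(10_000, shape=1.0, scale=1.0, rng=np.random.default_rng(0))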