600 research outputs found

    Estimation of Stretch Reflex Contributions of Wrist Using System Identification and Quantification of Tremor in Parkinson's Disease Patients

    "The brain's motor control can be studied by characterizing the activity of spinal motor nuclei to brain control, expressed as motor unit activity recordable by surface electrodes". When a specific area is under consideration, the first step in investigating the motor control system pertinent to it is system identification of that specific body part or area. The aim of this research is to characterize the working of the brain's motor control system by carrying out system identification of the wrist joint and quantifying the tremor observed in Parkinson's disease patients. We employ the ARMAX system identification technique to estimate the intrinsic and reflexive components of wrist stiffness, in order to facilitate analysis of problems associated with Parkinson's disease. The intrinsic stiffness dynamics comprise the majority of the total stiffness of the wrist joint, while the reflexive stiffness dynamics contribute to the tremor characteristic commonly found in Parkinson's disease patients. The quantification of PD tremor entails using blind source separation of convolutive mixtures to obtain the sources of tremor in patients suffering from movement disorders. When treated with blind source separation, the experimental data reveal sources exhibiting tremor frequency components of 3–8 Hz. System identification of stiffness dynamics and assessment of tremor can reveal the presence of additional abnormal neurological signs; early identification or diagnosis of these symptoms would be very advantageous for clinicians and instrumental in paving the way for better treatment of the disease.
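The ARMAX identification step in this abstract can be illustrated with a simplified ARX variant fitted by ordinary least squares. The second-order system, coefficients, and signal names below are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic second-order ARX system standing in for the wrist dynamics:
# y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + e[t]
a1, a2, b1 = 1.2, -0.5, 0.8
N = 2000
u = rng.standard_normal(N)              # perturbation-like input
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1 * y[t-1] + a2 * y[t-2] + b1 * u[t-1] \
        + 0.01 * rng.standard_normal()

# Stack lagged outputs and inputs into a regression matrix and solve the
# ordinary least-squares problem for the parameter vector.
Phi = np.column_stack([y[1:N-1], y[0:N-2], u[1:N-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:N], rcond=None)
print(theta)  # close to [1.2, -0.5, 0.8]
```

A full ARMAX model additionally fits a moving-average noise term, which requires an iterative solver rather than a single least-squares step.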

    Nonlinear blind mixture identification using local source sparsity and functional data clustering

    In this paper we propose several methods, using the same structure but with different criteria, for estimating the nonlinearities in nonlinear source separation. In particular, and contrary to state-of-the-art methods, our proposed approach uses a weak joint-sparsity assumption on the sources: we look for tiny temporal zones where only one source is active. This method is well suited to non-stationary signals such as speech. We extend our previous work to a more general class of nonlinear mixtures, proposing several nonlinear single-source confidence measures and several functional clustering techniques. Such approaches may be seen as extensions of linear instantaneous sparse component analysis to nonlinear mixtures. Experiments demonstrate the effectiveness and relevance of this approach.
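The single-source-zone idea can be sketched in the linear instantaneous case that the paper generalizes from: in a window where only one source is active, the mixtures are perfectly correlated and their amplitude ratio reveals a column of the mixing matrix. The sources, mixing matrix, and thresholds below are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sparse sources: activity bursts that never overlap, i.e. the weak
# joint-sparsity assumption (tiny zones where only one source is active).
N, win = 4000, 100
s = np.zeros((2, N))
s[0, 200:600] = rng.standard_normal(400)
s[1, 1500:1900] = rng.standard_normal(400)
s[0, 2500:2900] = rng.standard_normal(400)
s[1, 3200:3600] = rng.standard_normal(400)

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                  # unknown linear mixing matrix
x = A @ s

# In a window where only one source is active, the two mixtures are
# perfectly correlated, and their amplitude ratio equals the ratio of the
# entries of the corresponding column of A.
ratios = []
for start in range(0, N - win, win):
    seg = x[:, start:start + win]
    if seg.std(axis=1).min() < 1e-12:       # skip silent windows
        continue
    r = np.corrcoef(seg)[0, 1]
    if abs(r) > 0.999:                      # single-source confidence measure
        ratios.append(np.sign(r) * seg[1].std() / seg[0].std())

clusters = sorted(set(np.round(ratios, 2)))
print(clusters)  # two clusters: 0.4 (= 0.4/1.0) and 1.67 (= 1.0/0.6)
```

Clustering these per-window ratios recovers the mixing columns up to scale; the paper's contribution is extending this construction to nonlinear mixtures with functional clustering.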

    Two-Microphone Separation of Speech Mixtures


    A stochastic algorithm for probabilistic independent component analysis

    The decomposition of a sample of images on a relevant subspace is a recurrent problem in many different fields, from computer vision to medical image analysis. We propose in this paper a new learning principle and implementation of the generative decomposition model generally known as noisy ICA (for independent component analysis), based on the SAEM algorithm, which is a versatile stochastic approximation of the standard EM algorithm. We demonstrate the applicability of the method on a large range of decomposition models and illustrate the developments with experimental results on various data sets. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/11-AOAS499
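The exact EM iteration that SAEM stochastically approximates can be shown on the Gaussian special case of this generative model, probabilistic PCA, where both EM steps have closed forms. Dimensions and variable names below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy linear generative model x = W z + noise: the Gaussian cousin
# (probabilistic PCA) of the noisy-ICA decomposition.
d, q, n = 10, 2, 2000
W_true = rng.standard_normal((d, q))
Z = rng.standard_normal((q, n))
X = W_true @ Z + 0.1 * rng.standard_normal((d, n))
X = X - X.mean(axis=1, keepdims=True)

# Exact EM (closed-form steps); SAEM would replace the E-step below with
# a stochastic approximation when the posterior moments are intractable.
W = rng.standard_normal((d, q))
sigma2 = 1.0
for _ in range(200):
    # E-step: posterior moments of the latent codes z
    M = W.T @ W + sigma2 * np.eye(q)
    Minv = np.linalg.inv(M)
    Ez = Minv @ W.T @ X                     # E[z | x], shape (q, n)
    S = n * sigma2 * Minv + Ez @ Ez.T       # sum over n of E[z z^T | x]
    # M-step: closed-form updates for the loading matrix and noise level
    W = (X @ Ez.T) @ np.linalg.inv(S)
    sigma2 = (np.sum(X**2) - np.trace(W.T @ X @ Ez.T)) / (n * d)

print(np.sqrt(sigma2))  # recovers the true noise level, about 0.1
```

Replacing the Gaussian latent prior with a non-Gaussian one yields noisy ICA proper, at which point the E-step loses its closed form and stochastic approximation becomes necessary.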

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (232 pages)
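The tensor train (TT) decomposition emphasized here can be sketched with the standard TT-SVD construction: sequential truncated SVDs of reshaped matrices produce the chain of third-order cores. This is a minimal NumPy illustration under our own function names, not code from the monograph:

```python
import numpy as np

rng = np.random.default_rng(3)

def tt_svd(tensor, max_rank):
    """Decompose a tensor into a tensor train by sequential truncated SVDs."""
    dims = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, sv, Vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, len(sv))
        cores.append(U[:, :rk].reshape(r, dims[k], rk))   # third-order core
        mat = (sv[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A low-rank 4th-order tensor is recovered exactly from its TT cores.
a, b, c, d = (rng.standard_normal(6) for _ in range(4))
T = np.einsum('i,j,k,l->ijkl', a, b, c, d)   # rank-1 test tensor
cores = tt_svd(T, max_rank=3)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
print(err)  # essentially zero
```

The storage drops from the product of all mode sizes to a sum of small core sizes, which is the "super-compression" the abstract refers to.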


    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train.
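The Tucker model mentioned above can be sketched via the truncated higher-order SVD (HOSVD), which computes one factor matrix per mode-n unfolding and then contracts the tensor with them to obtain the core. This is a minimal NumPy illustration under our own function names, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(4)

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: leading left singular vectors of each unfolding."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for U in factors:
        # Contracting axis 0 each time cycles the modes; after all modes
        # the original mode order is restored.
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core by every factor matrix along its mode."""
    out = core
    for U in factors:
        out = np.tensordot(out, U, axes=([0], [1]))
    return out

# Build a tensor with exact multilinear rank (2, 2, 2) and recover it.
G = rng.standard_normal((2, 2, 2))
A, B, C = (rng.standard_normal((5, 2)) for _ in range(3))
T = tucker_reconstruct(G, [A, B, C])
core, factors = hosvd(T, ranks=(2, 2, 2))
err = np.linalg.norm(tucker_reconstruct(core, factors) - T) / np.linalg.norm(T)
print(err)  # essentially zero
```

For a general tensor the truncated HOSVD is quasi-optimal rather than exact, but it is the standard starting point for the Tucker fitting algorithms the paper surveys.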

    Source Separation for Hearing Aid Applications


    Convolutive Blind Source Separation Methods

    In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
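The convolutive mixing model underlying these algorithms can be made concrete: in the DFT domain, FIR mixing becomes an ordinary instantaneous mixture in every frequency bin, which is why many of the surveyed methods operate bin by bin in the STFT domain. The sketch below assumes the mixing filters are known, so it only demonstrates the model, not a blind algorithm; all names and filter lengths are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Convolutive mixing: each sensor receives each source through an FIR
# filter, x_i[t] = sum_j (h_ij * s_j)[t].  In the DFT domain this becomes
# an ordinary instantaneous 2x2 mixture at every frequency bin.
N, L = 512, 8
s = rng.standard_normal((2, N))                 # two source signals
H = 0.1 * rng.standard_normal((2, 2, L))        # 2x2 bank of length-8 filters
H[0, 0, 0] += 1.0
H[1, 1, 0] += 1.0   # dominant direct paths keep every bin well conditioned

# Mixing applied per frequency bin (equivalent to circular convolution).
Hf = np.fft.rfft(H, n=N, axis=2)                # per-bin 2x2 mixing matrices
Sf = np.fft.rfft(s, axis=1)
Xf = np.einsum('ijf,jf->if', Hf, Sf)

# With the per-bin matrices known, bin-wise inversion separates the
# sources exactly; a blind method must instead estimate these matrices
# (e.g. by ICA in each bin) and resolve the permutation across bins.
Yf = np.einsum('fij,jf->if', np.linalg.inv(np.moveaxis(Hf, 2, 0)), Xf)
y = np.fft.irfft(Yf, n=N, axis=1)
print(np.max(np.abs(y - s)))                    # essentially zero
```

The per-bin scaling and permutation ambiguities noted in the comment are precisely the alignment problems that distinguish convolutive from instantaneous BSS.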