
    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
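
    As a concrete illustration of the basic CP and Tucker models discussed in this abstract, the following Python sketch fits both decompositions to a random 3-way array. It assumes the TensorLy library (its parafac and tucker routines) and uses placeholder sizes and ranks, so it is an illustrative sketch rather than the paper's own code.

    # Minimal sketch: CP and Tucker decompositions of a 3-way tensor, assuming
    # the TensorLy API (tensorly.decomposition.parafac / tucker). Data sizes
    # and ranks are illustrative placeholders.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac, tucker

    X = tl.tensor(np.random.rand(30, 40, 20))        # e.g. channels x time x trials

    # Canonical Polyadic (CP) decomposition: X ~ sum_r a_r (outer) b_r (outer) c_r
    cp = parafac(X, rank=5)
    X_cp = tl.cp_to_tensor(cp)

    # Tucker decomposition: X ~ core x_1 A x_2 B x_3 C
    core, factors = tucker(X, rank=[5, 5, 3])
    X_tk = tl.tucker_to_tensor((core, factors))

    print("CP relative error:    ", tl.norm(X - X_cp) / tl.norm(X))
    print("Tucker relative error:", tl.norm(X - X_tk) / tl.norm(X))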

    Riemannian approaches in Brain-Computer Interfaces: a review

    Although promising for numerous applications, current Brain-Computer Interfaces (BCIs) still suffer from a number of limitations. In particular, they are sensitive to noise, outliers and the non-stationarity of ElectroEncephaloGraphic (EEG) signals, they require long calibration times and are not reliable. Thus, new approaches and tools, notably at the EEG signal processing and classification level, are necessary to address these limitations. Riemannian approaches, spearheaded by the use of covariance matrices, are one such very promising tool, now being adopted by a growing number of researchers. This article, after a quick introduction to Riemannian geometry and a presentation of the BCI-relevant manifolds, reviews how these approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design and calibration time reduction. Finally, relevant challenges and promising research directions for EEG signal classification in BCIs are identified, such as feature tracking on the manifold or multi-task learning.
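
    To make the covariance-based geometry concrete, here is a minimal NumPy/SciPy sketch of the affine-invariant Riemannian distance between two SPD covariance matrices, the quantity underlying the minimum-distance-to-mean style classifiers reviewed in the article. The toy 8-channel "trials" are placeholders, not data from the review.

    # Minimal sketch: affine-invariant Riemannian distance between SPD covariance
    # matrices, the building block of Riemannian BCI classifiers such as MDM.
    # The synthetic EEG covariances below are illustrative placeholders.
    import numpy as np
    from scipy.linalg import eigh

    def _powm(C, p):
        """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
        w, V = eigh(C)
        return (V * w**p) @ V.T

    def riemannian_distance(A, B):
        """delta(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F for SPD matrices A, B."""
        A_isqrt = _powm(A, -0.5)
        w = eigh(A_isqrt @ B @ A_isqrt, eigvals_only=True)
        return np.sqrt(np.sum(np.log(w) ** 2))

    def spd_from_signal(x):
        """Sample covariance of a (channels x samples) signal block."""
        x = x - x.mean(axis=1, keepdims=True)
        return (x @ x.T) / (x.shape[1] - 1)

    rng = np.random.default_rng(0)
    C1 = spd_from_signal(rng.standard_normal((8, 500)))        # trial 1
    C2 = spd_from_signal(2.0 * rng.standard_normal((8, 500)))  # trial 2, scaled

    print("Riemannian distance:", riemannian_distance(C1, C2))
    print("Euclidean distance: ", np.linalg.norm(C1 - C2, 'fro'))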

    Sparse machine learning methods with applications in multivariate signal processing

    This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity will be employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application-specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real-world datasets was carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; classification of genre of musical pieces to which a listener is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions of future research.
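
    As an illustration of the sparsity-enforcing building blocks the thesis refers to, the sketch below solves a Lasso problem with iterative soft-thresholding (ISTA) in plain NumPy. The dictionary, signal, regularisation weight and iteration count are illustrative placeholders and are not taken from the thesis.

    # Minimal sketch: enforcing sparsity with the Lasso, solved by iterative
    # soft-thresholding (ISTA). Dictionary D and signal y are random placeholders;
    # lam and n_iter are illustrative settings.
    import numpy as np

    def ista(D, y, lam=0.1, n_iter=200):
        """Minimise 0.5*||y - D w||^2 + lam*||w||_1 by proximal gradient descent."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        w = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ w - y)
            z = w - grad / L
            w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return w

    rng = np.random.default_rng(1)
    D = rng.standard_normal((64, 256))          # overcomplete dictionary
    w_true = np.zeros(256)
    w_true[rng.choice(256, 5, replace=False)] = 1.0
    y = D @ w_true + 0.01 * rng.standard_normal(64)

    w_hat = ista(D, y)
    print("non-zeros recovered:", np.sum(np.abs(w_hat) > 1e-3))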

    Représentations parcimonieuses pour les signaux multivariés

    In this thesis, we study approximation and learning methods that provide sparse representations. These methods make it possible to analyse highly redundant databases using dictionaries of learned atoms. Being adapted to the data under study, such dictionaries outperform classical dictionaries, whose atoms are defined analytically, in terms of representation quality. We focus on multivariate signals arising from the simultaneous acquisition of several quantities, such as EEG signals or 2D and 3D motion signals. We extend sparse representation methods to a multivariate model that accounts for the interactions between the components acquired simultaneously. This model is more flexible than the usual multichannel model, which imposes a rank-1 hypothesis. We study models of invariant representations: invariance to temporal shift, invariance to rotation, etc. By adding extra degrees of freedom, each kernel is potentially replicated into a family of atoms, translated to every sample, rotated to every orientation, and so on. A dictionary of invariant kernels therefore generates a highly redundant dictionary of atoms, which is ideal for representing the redundant data under study. All these invariances require methods adapted to the corresponding models. Temporal shift-invariance is an essential property for studying temporal signals with natural temporal variability. In the 2D and 3D rotation-invariant case, we observe that the non-oriented approach outperforms the oriented one, even when the data are not rotated: the non-oriented model detects the invariants of the data and ensures robustness to rotation when the data are rotated. We also observe the reproducibility of sparse decompositions on a learned dictionary; this generative property stems from the fact that dictionary learning is a generalisation of K-means. Moreover, our representations possess many invariances, which is ideal for classification. We therefore study how to perform classification adapted to the shift-invariant model, using shift-consistent pooling functions.
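
    To illustrate the shift-invariant multivariate model in code, the following sketch implements a greedy matching-pursuit decomposition of a multichannel signal onto translated multivariate kernels. The kernels, signal and number of iterations are placeholders, and the brute-force search over shifts is kept deliberately simple rather than reflecting the thesis's actual algorithms.

    # Minimal sketch: shift-invariant matching pursuit for a multivariate signal,
    # in the spirit of the multivariate model described above. Kernels, signal
    # and iteration count are illustrative placeholders.
    import numpy as np

    def shift_invariant_mp(x, kernels, n_atoms=5):
        """Greedy decomposition of x (channels x samples) on translated multivariate kernels."""
        residual = x.astype(float)
        decomposition = []
        for _ in range(n_atoms):
            best = (0.0, None, None)                       # (correlation, kernel index, shift)
            for k, g in enumerate(kernels):                # g has shape (channels, K)
                K = g.shape[1]
                for tau in range(x.shape[1] - K + 1):      # all temporal shifts
                    c = np.sum(residual[:, tau:tau + K] * g)
                    if abs(c) > abs(best[0]):
                        best = (c, k, tau)
            c, k, tau = best
            K = kernels[k].shape[1]
            residual[:, tau:tau + K] -= c * kernels[k]     # subtract the selected atom
            decomposition.append((k, tau, c))
        return decomposition, residual

    rng = np.random.default_rng(2)
    # 3 kernels of 4 channels x 32 samples, each with unit Frobenius norm
    kernels = [g / np.linalg.norm(g) for g in rng.standard_normal((3, 4, 32))]
    x = rng.standard_normal((4, 256))
    atoms, res = shift_invariant_mp(x, kernels)
    print(atoms, "residual energy:", np.linalg.norm(res))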

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine

    Nonparametric Estimation of Distributional Functionals and Applications.

    Distributional functionals are integrals of functionals of probability densities and include functionals such as information divergence, mutual information, and entropy. Distributional functionals have many applications in the fields of information theory, statistics, signal processing, and machine learning. Many existing nonparametric distributional functional estimators have either unknown convergence rates or are difficult to implement. In this thesis, we consider the problem of nonparametrically estimating functionals of distributions when only a finite population of independent and identically distributed samples is available from each of the unknown, smooth, d-dimensional distributions. We derive mean squared error (MSE) convergence rates for leave-one-out kernel density plug-in estimators and k-nearest neighbor estimators of these functionals. We then extend the theory of optimally weighted ensemble estimation to obtain estimators that achieve the parametric MSE convergence rate when the densities are sufficiently smooth. These estimators are simple to implement and do not require knowledge of the densities’ support set, in contrast with many competing estimators. The asymptotic distribution of these estimators is also derived. The utility of these estimators is demonstrated through their application to sunspot image data and neural data measured from epilepsy patients. Sunspot images are clustered by estimating the divergence between the underlying probability distributions of image pixel patches. The problem of overfitting is also addressed in both applications by performing dimensionality reduction via intrinsic dimension estimation and by benchmarking classification via Bayes error estimation.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133394/1/krmoon_1.pd
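
    For concreteness, the sketch below implements the classical k-nearest-neighbor (Kozachenko-Leonenko) estimator of differential entropy, one member of the family of plug-in and k-NN functional estimators analysed in the thesis. It is not the thesis's optimally weighted ensemble estimator, and the sample data and choice of k are placeholders.

    # Minimal sketch: k-nearest-neighbor (Kozachenko-Leonenko) estimator of
    # differential entropy. Sample data and k are illustrative placeholders.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def knn_entropy(X, k=5):
        """Estimate differential entropy (in nats) of samples X with shape (N, d)."""
        N, d = X.shape
        tree = cKDTree(X)
        eps, _ = tree.query(X, k=k + 1)            # k+1 because the point itself is included
        eps_k = eps[:, -1]                         # distance to the k-th nearest neighbor
        log_ball_volume = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
        return (digamma(N) - digamma(k) + log_ball_volume
                + d * np.mean(np.log(eps_k)))

    rng = np.random.default_rng(3)
    X = rng.standard_normal((5000, 2))             # samples from N(0, I_2)
    true_H = 0.5 * 2 * np.log(2 * np.pi * np.e)    # entropy of a 2-D standard Gaussian
    print("estimate:", knn_entropy(X), "true:", true_H)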

    Motion Artifact Processing Techniques for Physiological Signals

    The combination of a falling birth rate and increasing life expectancy continues to drive the demographic shift toward an ageing population, and this is placing an ever-increasing burden on our healthcare systems. The urgent need to address this so-called healthcare "time bomb" has led to a rapid growth in research into ubiquitous, pervasive and distributed healthcare technologies, where recent advances in signal acquisition, data storage and communication are helping such systems become a reality. However, similar to recordings performed in the hospital environment, artifacts continue to be a major issue for these systems. The magnitude and frequency of artifacts can vary significantly depending on the recording environment, with one of the major contributions due to the motion of the subject or the recording transducer. As such, this thesis addresses the challenge of removing this motion artifact from various physiological signals. The preliminary investigations focus on artifact identification and the tagging of physiological signal streams with measures of signal quality. A new method for quantifying signal quality is developed based on the use of inexpensive accelerometers, which facilitates the appropriate use of artifact processing methods as needed. These artifact processing methods are thoroughly examined as part of a comprehensive review of the most commonly applicable methods. This review forms the basis for the comparative studies subsequently presented. Then, a simple but novel experimental methodology for the comparison of artifact processing techniques is proposed, designed and tested for algorithm evaluation. The method is demonstrated to be highly effective for the type of artifact challenges common in a connected health setting, particularly those concerned with brain activity monitoring. This research primarily focuses on applying the techniques to functional near infrared spectroscopy (fNIRS) and electroencephalography (EEG) data due to their high susceptibility to contamination by subject-motion-related artifact. Using the novel experimental methodology, complemented with simulated data, a comprehensive comparison of a range of artifact processing methods is conducted, allowing the identification of the set of best performing methods. A novel artifact removal technique is also developed, namely ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), which provides the best results when applied to fNIRS data under particular conditions. Four of the best performing techniques were then tested on real ambulatory EEG data contaminated with movement artifacts comparable to those observed during in-home monitoring. It was determined that when analysing EEG data, the Wiener filter is consistently the best performing artifact removal technique. However, when employing the fNIRS data, the best technique depends on a number of factors including: 1) the availability of a reference signal and 2) whether or not the form of the artifact is known. It is envisaged that the use of physiological signal monitoring for patient healthcare will grow significantly over the coming decades, and it is hoped that this thesis will aid in the progression and development of artifact removal techniques capable of supporting this growth.
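
    As a generic example of reference-based artifact processing of the kind compared in the thesis, the sketch below removes a synthetic motion artifact from a contaminated signal using a normalized LMS adaptive filter driven by an accelerometer reference. This is a stand-in illustration (the thesis itself evaluates methods such as the Wiener filter and EEMD-CCA), and all signals and settings are synthetic placeholders.

    # Minimal sketch: reference-based motion-artifact reduction with a normalized
    # LMS adaptive filter, using an accelerometer channel as the reference.
    # The synthetic EEG, accelerometer and filter settings are placeholders.
    import numpy as np

    def nlms_artifact_removal(contaminated, reference, order=16, mu=0.1, eps=1e-6):
        """Subtract the part of `contaminated` predictable from `reference` (e.g. accelerometer)."""
        n = len(contaminated)
        w = np.zeros(order)
        cleaned = np.zeros(n)
        for t in range(order, n):
            r = reference[t - order:t][::-1]        # most recent reference samples
            artifact_est = w @ r
            e = contaminated[t] - artifact_est      # error = cleaned sample
            w += mu * e * r / (r @ r + eps)         # normalized LMS update
            cleaned[t] = e
        return cleaned

    rng = np.random.default_rng(4)
    t = np.arange(2000) / 250.0                     # 250 Hz sampling
    eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
    accel = rng.standard_normal(t.size)             # motion reference
    artifact = np.convolve(accel, [0.6, 0.3, 0.1], mode="same")
    contaminated = eeg + 2.0 * artifact

    cleaned = nlms_artifact_removal(contaminated, accel)
    print("residual artifact power:", np.mean((cleaned[100:] - eeg[100:]) ** 2))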

    Learning Multimodal Structures in Computer Vision

    A phenomenon or event can be observed by various kinds of detectors or under different conditions. Each such acquisition framework is a modality of the phenomenon. Because the modalities of a multimodal phenomenon are related, a single modality cannot fully describe the event of interest, and the fact that several modalities report on the same event introduces new challenges compared to exploiting each modality separately. We are interested in designing new algorithmic tools for sensor fusion within the particular signal representation of sparse coding, a widely used methodology in signal processing, machine learning and statistics for representing data. This coding scheme is based on a machine learning technique and has been shown to be capable of representing many modalities, such as natural images. We consider situations in which we want the support of the model not only to be sparse but also to reflect a priori knowledge about the application at hand. Our goal is to extract a discriminative representation of the multimodal data that makes it easy to find its essential characteristics in the subsequent analysis step, e.g., regression and classification. More precisely, sparse coding represents signals as linear combinations of a small number of bases from a dictionary. The idea is to learn a dictionary that encodes the intrinsic properties of the multimodal data in a decomposition coefficient vector with maximal discriminatory power. We carefully design a multimodal representation framework that learns discriminative feature representations by fully exploiting both the modality-shared information, i.e., the information shared across modalities, and the modality-specific information, i.e., the content of each modality individually. In addition, it automatically learns the weights of the various feature components in a data-driven scheme. In other words, the physical interpretation of our learning framework is to fully exploit the correlated characteristics of the available modalities while leveraging the modality-specific character of each modality and adjusting their corresponding weights for different parts of the feature in recognition.
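
    As a simple illustration of dictionary-based multimodal coding, the sketch below learns a single dictionary over the concatenated features of two modalities with scikit-learn's DictionaryLearning, so that one shared sparse code couples both views of each sample. This generic set-up does not reproduce the thesis's shared/specific framework or its discriminative weighting, and all data and parameters are placeholders.

    # Minimal sketch: joint sparse coding of two modalities by learning one
    # dictionary on their concatenated features. Uses scikit-learn's
    # DictionaryLearning; the random data and settings are placeholders.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(5)
    n_samples = 200
    modality_a = rng.standard_normal((n_samples, 32))   # e.g. image features
    modality_b = rng.standard_normal((n_samples, 16))   # e.g. audio features

    # Concatenate modalities so each atom has a block per modality; a shared
    # sparse code then couples the two views of the same sample.
    X = np.hstack([modality_a, modality_b])

    dl = DictionaryLearning(n_components=40, alpha=1.0, max_iter=100,
                            transform_algorithm="lasso_lars", random_state=0)
    codes = dl.fit_transform(X)                          # sparse codes, shape (200, 40)
    dictionary = dl.components_                          # atoms, shape (40, 48)

    atoms_a = dictionary[:, :32]                         # block acting on modality A
    atoms_b = dictionary[:, 32:]                         # block acting on modality B
    print("average non-zeros per code:", np.mean(np.count_nonzero(codes, axis=1)))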