10 research outputs found

    Post Nonlinear Independent Subspace Analysis

    This paper presents a generalization of Post Nonlinear Independent Component Analysis (PNL-ICA) to Post Nonlinear Independent Subspace Analysis (PNL-ISA). In this framework, the sources to be identified may be multidimensional as well. For this generalization we prove a separability theorem: the ambiguities of the problem are essentially the same as for linear Independent Subspace Analysis (ISA). Applying this result, we derive an algorithm that exploits the mirror structure of the mixing system. Numerical simulations illustrate the efficiency of the algorithm.
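The post-nonlinear model can be sketched in a few lines: observations are x = f(As) for an invertible mixing matrix A and an invertible componentwise nonlinearity f, and the "mirror" unmixing applies an estimate of f's inverse followed by a linear unmixing. The toy sketch below (with f known and an oracle unmixing matrix, purely to show the structure; it is not the paper's estimation algorithm) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four source channels, viewed as two 2-D subspaces in the ISA setting
s = rng.uniform(-1, 1, size=(4, 5000))

# Linear mixing followed by a componentwise (post-) nonlinearity f = tanh
A = rng.uniform(-1, 1, size=(4, 4))
x = np.tanh(A @ s)              # observed post-nonlinear mixture

# Mirror structure: apply g ~ f^-1 componentwise, then unmix linearly.
# Here f is known, so g = arctanh undoes it exactly (oracle choice).
y = np.arctanh(np.clip(x, -0.999999, 0.999999))
s_hat = np.linalg.inv(A) @ y

print(np.max(np.abs(s - s_hat)))   # tiny: numerical round-off only
```

In practice neither f nor A is known; the separability theorem says they can still be identified from the mixtures alone, up to the usual ISA ambiguities.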

    Using seismic mixtures to extract tilts and recover estimates of ground displacements

    One of the goals of seismology is to understand the behaviour of the earth's movements during an earthquake. This research focuses on recovering better estimates of the true ground displacements, since tilt components are inherent in recorded acceleration time histories. The raw acceleration time histories recorded in near-field earthquake seismograms are contaminated by the effects of tilt time histories. These tilt effects cause non-zero baseline errors in seismic records, producing offsets in the ground velocity, so that the final velocity does not return to zero and the ground displacement diverges from a constant value. To perform baseline corrections it is therefore necessary to remove the tilt and noise components. Tilt separation was undertaken using a model designated the Tilt Separation - Independent Component Analysis (TS-ICA) model, together with an enhanced version of the Extended Generalised Beta Distribution (EGBD) model. Several source distributions, including Gaussian, non-Gaussian, sub- and super-Gaussian, and skewed distributions with zero kurtosis, have been modelled using the EGBD and separated using EGBD-ICA. To refine the EGBD-ICA model, a randomised mixing matrix was introduced into the existing EGBD-ICA model using MATLAB. With the introduction of the mixing matrix, the consistency of the source separation improved; in particular, tilt separation was convincing both for artificial tilt separation from the Hector Mine earthquake data and for real-time tilt separation from real-time acceleration time histories. Tilt separation and de-noising by the TS-ICA model gave better estimates of ground displacement than the tilt-contaminated ground displacement.
The estimated tilt angle offers further scope for seismologists and civil engineers to improve their understanding of tilt behaviour during an earthquake, and can add another dimension to their research by making it possible to improve the stability of building structures in seismically active and earthquake-prone regions.
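As a generic illustration of ICA-based separation with a randomised mixing matrix (a plain numpy FastICA sketch, not the TS-ICA or EGBD-ICA models of this work), two toy signals standing in for an acceleration trace and a tilt trace can be mixed and then recovered:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 1, n, endpoint=False)

# Two toy non-Gaussian sources standing in for distinct seismic components
s1 = np.sin(2 * np.pi * 5 * t)          # oscillatory "acceleration-like" trace
s2 = 2 * ((7 * t) % 1.0) - 1            # sawtooth "tilt-like" drift
src = np.vstack([s1, s2])

A = rng.normal(size=(2, 2))             # randomised mixing matrix
x = A @ src                             # observed mixtures

# Whiten the mixtures
xc = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(xc @ xc.T / n)
z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ xc

# Symmetric FastICA with the log-cosh contrast (g = tanh)
W = rng.normal(size=(2, 2))
for _ in range(200):
    g = np.tanh(W @ z)
    W_new = (g @ z.T) / n - np.diag((1 - g**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                          # symmetric decorrelation

s_hat = W @ z                           # sources up to permutation and sign
```

Recovered components match the true sources only up to permutation, sign, and scale, the standard ICA ambiguity.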

    Unsupervised Learning of Latent Structure from Linear and Nonlinear Measurements

    University of Minnesota Ph.D. dissertation. June 2019. Major: Electrical Engineering. Advisor: Nicholas Sidiropoulos. 1 computer file (PDF); xii, 118 pages. The past few decades have seen a rapid expansion of our digital world. While early dwellers of the Internet exchanged simple text messages via email, modern citizens of the digital world conduct a much richer set of activities online: entertainment, banking, and booking restaurants and hotels, to name a few. In our digitally enriched lives, we not only enjoy great convenience and efficiency but also leave behind massive amounts of data that offer ample opportunities for improving these digital services and creating new ones. Meanwhile, technical advancements have facilitated the emergence of new sensors and networks that can measure, exchange, and log data about real-world events. These technologies have been applied to many scenarios, including environmental monitoring, advanced manufacturing, healthcare, and scientific research in physics, chemistry, biotechnology, and social science. Leveraging this abundant data, learning-based and data-driven methods have become a dominant paradigm across different areas, with data analytics driving many recent developments. However, the massive amounts of data also bring considerable challenges for analytics. Among them, the collected data are often high-dimensional, with the true knowledge and signals of interest hidden underneath. It is therefore important to reduce the data dimension and transform the data into the right space. In some cases, the data are generated from identifiable generative models, making it possible to reduce the data back to the original space.
In addition, we are often interested in performing some analysis on the data after dimensionality reduction (DR), and it is helpful to be mindful of these subsequent analysis steps when performing DR, as latent structures can serve as a valuable prior. Based on this reasoning, we develop two methods: one for the linear generative model case and one for the nonlinear case. In a related setting, we study parameter estimation under unknown nonlinear distortion, where the unknown nonlinearity in the measurements poses a severe challenge. In practice, various mechanisms can introduce nonlinearity into measured data. To combat this challenge, we put forth a nonlinear mixture model that is well grounded in real-world applications. We show that this model is in fact identifiable up to some trivial indeterminacy. We develop an efficient algorithm to recover the latent parameters of this model, and confirm the effectiveness of our theory and algorithm via numerical experiments.
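The dimensionality-reduction theme can be illustrated in its simplest linear form: data generated from a low-dimensional linear model are reduced by PCA, which recovers the signal subspace. This is a generic sketch of the setting, not one of the dissertation's algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)

# High-dimensional data generated from a low-dimensional linear model
k, d, n = 3, 50, 2000
A = rng.normal(size=(d, k))
z = rng.normal(size=(k, n))
x = A @ z + 0.01 * rng.normal(size=(d, n))   # small observation noise

# PCA via SVD: the top-k left singular vectors span the signal subspace
xc = x - x.mean(axis=1, keepdims=True)
U, _, _ = np.linalg.svd(xc, full_matrices=False)
basis = U[:, :k]

# Relative error after projecting onto the k-dimensional subspace
x_proj = basis @ (basis.T @ xc)
rel_err = np.linalg.norm(xc - x_proj) / np.linalg.norm(xc)
print(rel_err)   # small: nearly all variance lies in the subspace
```

Here the linear model is identifiable only up to an invertible transform of the latent factors, which is one reason the subsequent analysis steps mentioned above matter when choosing a DR method.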

    Processus gaussiens pour la séparation de sources et le codage informé

    Source separation consists in recovering several signals that are observed only through one or more mixtures. The problem is particularly difficult, and to make separation possible, any additional information known about the sources or the mixture must be taken into account. In this thesis, I propose a general framework for including a large variety of prior information in source separation problems, in which each source signal is modeled as the realization of an independent Gaussian process, a powerful and general nonparametric Bayesian model. The approach has many advantages: it generalizes a large share of current methods, it permits the separation of sources defined on arbitrary input spaces, it allows many kinds of prior knowledge to be taken into account, and it leads to efficient, automatic parameter estimation.
This theoretical framework is applied to the informed source separation of audio sources, where separation is assisted by side information computed beforehand, during a preliminary encoding stage in which both the mixture and the sources are available. In a subsequent decoding stage, the sources are recovered using this information and the mixtures only. Provided this information can be encoded efficiently, it enables applications such as karaoke or the manipulation of individual instruments within a mix at a much lower bitrate than separate transmission of the sources would require. Informed source separation is closely akin to a multichannel coding problem. This analogy places it in a broader information-theoretic setting, where it becomes a particular source-coding problem and, as such, benefits from classical results of coding theory: its optimal performance can be derived as rate-distortion functions, and practical coding algorithms achieving these bounds can be designed.
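A minimal single-channel sketch of the Gaussian-process posterior (Wiener) estimator: when two zero-mean sources have known covariances C1 and C2 and the mixture is x = s1 + s2, the posterior mean of s1 given x is C1 (C1 + C2)^-1 x. The squared-exponential covariances below are toy assumptions, not models taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.linspace(0, 1, n)

# Squared-exponential covariance with a small jitter for numerical stability
def se_cov(t, length, var):
    diff = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (diff / length) ** 2) + 1e-8 * np.eye(len(t))

C1 = se_cov(t, 0.2, 1.0)     # slowly varying source
C2 = se_cov(t, 0.01, 1.0)    # rapidly varying source

s1 = rng.multivariate_normal(np.zeros(n), C1)
s2 = rng.multivariate_normal(np.zeros(n), C2)
x = s1 + s2                  # single-channel mixture

# Posterior mean of s1 given x: E[s1 | x] = C1 (C1 + C2)^-1 x
s1_hat = C1 @ np.linalg.solve(C1 + C2, x)
s2_hat = x - s1_hat

print(np.corrcoef(s1, s1_hat)[0, 1])   # high when the two scales differ
```

The same posterior-mean formula generalizes to multichannel mixtures and arbitrary kernels, which is what makes the Gaussian-process formalism able to absorb so many kinds of prior information.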

    Identifiability of post-nonlinear mixtures

    No full text
    Abstract—This letter deals with the resolution of the blind source separation problem in post-nonlinear mixtures using the independent component analysis method. Under the sole hypothesis of source independence, it is not obvious how to reconstruct the sources in nonlinear mixtures. Here, we prove identifiability under weak assumptions on the mixing matrix and the source densities. Index Terms—Blind source separation, identifiability, independent component analysis (ICA), post-nonlinear mixture.
