
    Joint Independent Subspace Analysis Using Second-Order Statistics

    This paper deals with a novel generalization of classical blind source separation (BSS) in two directions. First, we relax the constraint that the latent sources must be statistically independent; this generalization is well known and sometimes termed independent subspace analysis (ISA). Second, we jointly analyze several ISA problems, where the link is due to statistical dependence among corresponding sources in different mixtures. When the data are one-dimensional, i.e., multiple classical BSS problems, this model, known as independent vector analysis (IVA), has already been studied. In this paper, we combine IVA with ISA and term this new model joint independent subspace analysis (JISA). We provide a full performance analysis of JISA, including closed-form expressions for the minimal mean square error (MSE), Fisher information, and Cramér-Rao lower bound in the separation of Gaussian data. The derived MSE also applies to non-Gaussian data when only second-order statistics are used. We generalize previously known results on IVA, including its ability to uniquely resolve instantaneous mixtures of real Gaussian stationary data and to obtain the same arbitrary permutation at all mixtures. Numerical experiments validate our theoretical results and show the gain with respect to two competing approaches that either use a finer block partition or a different norm.
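The principle of separation from second-order statistics alone can be illustrated in its simplest one-dimensional BSS form (an AMUSE-style sketch on invented data, not the paper's JISA estimator): whiten the mixtures, then diagonalize a time-lagged covariance matrix, which is identifiable when the sources have distinct temporal correlation structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two AR(1) sources with different lag-1 autocorrelations, so their
# second-order (temporal) statistics differ.
n = 5000
s = np.zeros((2, n))
for t in range(1, n):
    s[0, t] = 0.9 * s[0, t - 1] + rng.standard_normal()
    s[1, t] = -0.5 * s[1, t - 1] + rng.standard_normal()

A = rng.standard_normal((2, 2))           # unknown mixing matrix
x = A @ s                                  # observed mixtures

# Step 1: whiten the observations using the zero-lag covariance.
x = x - x.mean(axis=1, keepdims=True)
C0 = x @ x.T / n
d, E = np.linalg.eigh(C0)
W = E @ np.diag(d ** -0.5) @ E.T           # whitening matrix
z = W @ x

# Step 2: diagonalize one symmetrized lagged covariance of the whitened data.
tau = 1
C1 = z[:, :-tau] @ z[:, tau:].T / (n - tau)
C1 = 0.5 * (C1 + C1.T)
_, U = np.linalg.eigh(C1)
y = U.T @ z                                # recovered sources (up to sign/permutation/scale)

# The global transform should be close to a scaled, signed permutation.
G = U.T @ W @ A
G = G / np.abs(G).max(axis=1, keepdims=True)
```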

    Nonlinear Filtering based on Log-homotopy Particle Flow: Methodological Clarification and Numerical Evaluation

    The state estimation of dynamical systems based on measurements is a ubiquitous problem, relevant in applications such as robotics, industrial manufacturing, computer vision, and target tracking. Recursive Bayesian methodology can be used to estimate the hidden states of a dynamical system. The procedure consists of two steps: a process update based on solving the equations modelling the state evolution, and a measurement update in which the prior knowledge about the system is improved based on the measurements. For most real-world systems, both the evolution and the measurement models are nonlinear functions of the system states. Additionally, both models can be perturbed by random noise sources, which may be non-Gaussian in nature. Unlike the linear Gaussian case, no optimal estimation scheme exists for nonlinear/non-Gaussian scenarios. This thesis investigates a particular method for nonlinear and non-Gaussian data assimilation, termed the log-homotopy based particle flow. Practical filters based on such flows are known in the literature as Daum-Huang filters (DHF), named after their developers. The key concept behind such filters is the gradual inclusion of measurements to counter a major drawback of single-step update schemes such as the particle filter, namely degeneracy. This refers to a situation where the likelihood function has its probability mass well separated from the prior density, and/or is sharply peaked in comparison. Conventional sampling- or grid-based techniques do not perform well under such circumstances and, to achieve reasonable accuracy, can incur a high processing cost. The DHF is a sampling-based scheme that provides a unique way to tackle this challenge, thereby lowering the processing cost.
This is achieved by dividing the single measurement update step into multiple sub-steps, such that particles are moved incrementally from their prior locations until they reach their final locations. The motion is governed by a differential equation, which is solved numerically to yield the updated states. DH filters, although not new in the literature, have not yet been explored in full detail; they lack the in-depth analysis that other contemporary filters have undergone. In particular, the implementation details of the DHF are very application specific. In this work, we pursue four main objectives. The first is the exploration of the theoretical concepts behind the DHF. Second, we build an understanding of the existing implementation framework and highlight its potential shortcomings; as a subtask, we carry out a detailed study of the important factors that affect the performance of a DHF and suggest possible improvements for each. The third objective is to use the improved implementation to derive new filtering algorithms. Finally, we extend the DHF theory and derive new flow equations and filters to cater for more general scenarios. Improvements in the implementation architecture of a standard DHF are one of the key contributions of this thesis. The scope of applicability of the DHF is expanded by combining it with other schemes, such as sequential Markov chain Monte Carlo and the tensor-decomposition-based solution of the Fokker-Planck equation, resulting in new nonlinear filtering algorithms. The standard DHF with the improved implementation and the newly derived algorithms are tested in challenging simulated scenarios. Detailed analyses are carried out, together with comparisons against more established filtering schemes, using estimation error and processing time as the key performance measures. We show that our new filtering algorithms exhibit marked performance improvements over the traditional schemes.
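The gradual measurement inclusion can be sketched for the linear-Gaussian case, where the log-homotopy flow has Daum and Huang's well-known closed-form "exact flow" drift. Particles drawn from the prior are migrated in pseudo-time lambda from 0 to 1 by Euler steps; at lambda = 1 their mean should match the Kalman update. This is a minimal illustration on an invented two-state example, not the thesis's improved implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian setup: prior N(m0, P), measurement z = H x + v, v ~ N(0, R).
d = 2
m0 = np.array([1.0, -1.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, 0.5]])
R = np.array([[0.4]])
x_true = rng.multivariate_normal(m0, P)
z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

# Particles sampled from the prior.
N = 2000
X = rng.multivariate_normal(m0, P, size=N).T      # shape (d, N)

# Exact Daum-Huang flow dx/dlam = A(lam) x + b(lam), integrated with Euler steps.
n_steps = 50
dlam = 1.0 / n_steps
for k in range(n_steps):
    lam = k * dlam
    S = lam * H @ P @ H.T + R
    A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
    b = (np.eye(d) + 2 * lam * A) @ (
        (np.eye(d) + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ m0
    )
    X = X + dlam * (A @ X + b[:, None])

# Reference: Kalman posterior mean for the same measurement update.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
m_post = m0 + (K @ (z - H @ m0)).ravel()
```

The multiple sub-steps of the loop are exactly the "gradual inclusion of measurements" described above: each Euler step moves every particle a small distance along the homotopy from prior to posterior, so no particle weight ever collapses.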

    Decentralized Ambient System Identification of Structures

    Many existing ambient modal identification methods based on vibration data process information centrally to calculate the modal properties. Such methods demand relatively large memory and processing capabilities to interrogate the data. With recent advances in wireless sensor technology, it is now possible to process information on the sensor itself. The decentralized information obtained from individual sensors can then be combined to estimate the global modal information of the structure. The main objective of this thesis is to present a new class of decentralized algorithms that address the limitations stated above. The work involves casting the identification problem within the framework of underdetermined blind source separation (BSS). Time-frequency transformations of the measurements are carried out, resulting in a sparse representation of the signals; the stationary wavelet packet transform (SWPT) is used as the primary means of obtaining this sparse time-frequency representation. Several partial setups are used to obtain partial modal information, which is then combined to obtain the global structural mode information. Most BSS methods in the context of modal identification assume that the excitation is white and does not contain narrow-band excitation frequencies. However, this assumption is not satisfied in many situations (e.g., pedestrian bridges) where the excitation is a superposition of narrow-band harmonic(s) and broad-band disturbance. Under such conditions, traditional BSS methods yield sources (modes) without any indication of whether an identified source is a system mode or an excitation harmonic. In this research, a novel underdetermined BSS algorithm is developed involving statistical characterization of the sources, which is used to distinguish sources corresponding to external disturbances from intrinsic modes of the system.
Moreover, the computational burden associated with an over-complete dictionary of sparse bases is alleviated through a new underdetermined BSS method based on a tensor algebra tool called PARAllel FACtor (PARAFAC) decomposition. At the core of this method, the wavelet packet decomposition coefficients are used to form a covariance tensor, followed by PARAFAC tensor decomposition to separate the modal responses. Finally, the proposed methods are validated using measurements obtained from both wired and wireless sensors on laboratory-scale and full-scale buildings and bridges.
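The PARAFAC idea can be sketched on a toy problem: a covariance tensor built from lagged covariance matrices of mixed narrow-band "modal responses" has a rank-R CP structure whose first factor matrix contains the mode shapes. This minimal sketch substitutes lagged covariances for the wavelet packet covariance tensor used in the thesis, and all signals, mode shapes, and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "modal responses": two sinusoids at distinct frequencies,
# mixed by a mode-shape matrix and observed at 3 sensors.
n, L = 20000, 10
t = np.arange(n)
y = np.vstack([
    np.cos(2 * np.pi * 0.05 * t + rng.uniform(0, 2 * np.pi)),
    np.cos(2 * np.pi * 0.12 * t + rng.uniform(0, 2 * np.pi)),
])
Phi = np.array([[1.0, 1.0], [0.8, -0.6], [0.4, 0.9]])   # assumed mode shapes
x = Phi @ y + 0.01 * rng.standard_normal((3, n))

# Covariance tensor: lagged covariance matrices stacked along the third mode,
# so C[:, :, l] ~ Phi diag(rho_r(l)) Phi^T, a rank-2 CP model.
m = n - L + 1
C = np.stack([x[:, :m] @ x[:, l:l + m].T / m for l in range(L)], axis=2)
C = 0.5 * (C + C.transpose(1, 0, 2))        # symmetrize each slice

def unfold(T, mode):
    """Mode-n unfolding consistent with the Khatri-Rao ordering below."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

# Rank-2 PARAFAC (CP) decomposition by alternating least squares.
R = 2
A1 = rng.standard_normal((3, R))
A2 = rng.standard_normal((3, R))
A3 = rng.standard_normal((L, R))
for _ in range(200):
    A1 = np.linalg.lstsq(khatri_rao(A2, A3), unfold(C, 0).T, rcond=None)[0].T
    A2 = np.linalg.lstsq(khatri_rao(A1, A3), unfold(C, 1).T, rcond=None)[0].T
    A3 = np.linalg.lstsq(khatri_rao(A1, A2), unfold(C, 2).T, rcond=None)[0].T

# Columns of A1 should match the mode shapes up to scale and permutation.
A1n = A1 / np.linalg.norm(A1, axis=0)
Pn = Phi / np.linalg.norm(Phi, axis=0)
match = np.abs(A1n.T @ Pn)
```

The uniqueness of the CP decomposition (under Kruskal's condition) is what lets this separation work without knowing the mixing in advance, which is the property the thesis exploits on wavelet packet coefficients.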

    Scientific Report 2002 / 2003
