
    Robust partial-learning in linear Gaussian systems

    This paper deals with unsupervised, offline learning of the parameters of linear Gaussian systems, i.e. the estimation of the transition and noise covariance matrices of a state-space system from a finite series of observations only. In practice, these systems arise from a physical problem for which there is partial knowledge, either of the sensors producing the observations or of the state of the studied system. We therefore propose an Expectation-Maximization (EM) learning algorithm that takes constraints on the parameters into account, such as the fact that two identical sensors have the same noise characteristics, so the estimation procedure should exploit this knowledge. The algorithms are designed for the pairwise linear Gaussian system, which adds cross-dependences between observations and hidden states relative to the conventional linear system while still allowing optimal filtering by means of a Kalman-like filter. The algorithm is made robust through QR decompositions and the propagation of a square root of the covariance matrices instead of the matrices themselves. It is assessed through a series of experiments that compare the algorithms with and without partial knowledge, for short as well as for long signals.
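The square-root mechanics mentioned in the abstract can be sketched in a few lines. This is a generic illustration under assumed matrices, not the paper's exact algorithm: the time update of a square-root Kalman filter, where a QR decomposition propagates a square root of the covariance and so preserves symmetry and positive semi-definiteness numerically.

```python
import numpy as np

# Time update of a square-root Kalman filter (generic sketch).  Instead of
# forming P = F P F^T + Q directly, stack the square-root factors and let a
# QR decomposition produce the square root of the predicted covariance.
F = np.array([[1.0, 1.0], [0.0, 1.0]])        # transition matrix (assumed)
S = np.linalg.cholesky(np.diag([2.0, 1.0]))   # S S^T = P, current covariance
Q_sqrt = np.linalg.cholesky(0.1 * np.eye(2))  # square root of process noise

# Stack [ (F S)^T ; Q_sqrt^T ] and take its QR; R^T is then a square root
# of F P F^T + Q, since M^T M = R^T R.
M = np.vstack([(F @ S).T, Q_sqrt.T])
_, R = np.linalg.qr(M)
S_pred = R.T                                  # predicted covariance square root

P_pred = S_pred @ S_pred.T
P_direct = F @ (S @ S.T) @ F.T + 0.1 * np.eye(2)
print(np.allclose(P_pred, P_direct))          # True: the two forms agree
```

The point of the detour through QR is that `S_pred` is obtained without ever subtracting matrices, which is where loss of positivity typically creeps in.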

    Constrained expectation maximisation algorithm for estimating ARMA models in state space representation

    This paper discusses the fitting of linear state space models to given multivariate time series in the presence of constraints imposed on the four main parameter matrices of these models. Constraints arise partly from the assumption that the models have a block-diagonal structure, with each block corresponding to an ARMA process, which allows the reconstruction of independent source components from linear mixtures, and partly from the need to keep the models identifiable. The first stage of parameter fitting is performed by the expectation maximisation (EM) algorithm. Due to the identifiability constraint, a subset of the diagonal elements of the dynamical noise covariance matrix needs to be constrained to fixed values (usually unity). For this kind of constraint, no closed-form update rules were previously available. We present new update rules for this situation, both for updating the dynamical noise covariance matrix directly and for updating a matrix square root of it. The practical applicability of the proposed algorithm is demonstrated by a low-dimensional simulation example. The behaviour of the EM algorithm observed in this example illustrates the well-known fact that, in practical applications, the EM algorithm should be combined with a different numerical optimisation algorithm, such as a quasi-Newton method.
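The effect of the identifiability constraint can be illustrated with a deliberately simple device. The snippet below is NOT the closed-form update derived in the paper; it only shows one naive way to force selected diagonal entries of an unconstrained covariance estimate to unity, by a symmetric rescaling that preserves symmetry and positive definiteness.

```python
import numpy as np

# Illustrative projection onto the constraint Q[i, i] == 1 for selected
# indices, via symmetric row/column rescaling (as when turning a covariance
# matrix into a correlation matrix).  Not the paper's update rule.
def fix_diagonal(Q_hat, constrained):
    """Rescale Q_hat so Q[i, i] == 1 for each index in `constrained`."""
    d = np.ones(Q_hat.shape[0])
    for i in constrained:
        d[i] = 1.0 / np.sqrt(Q_hat[i, i])
    D = np.diag(d)
    return D @ Q_hat @ D   # congruence transform keeps positive definiteness

Q_hat = np.array([[4.0, 1.0],
                  [1.0, 2.0]])               # hypothetical M-step output
Q = fix_diagonal(Q_hat, constrained=[0, 1])
print(np.diag(Q))                            # -> [1. 1.]
```

A rescaling like this does not maximise the expected likelihood under the constraint, which is exactly why the paper's dedicated update rules are needed.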

    Quality-driven and real-time iris recognition from close-up eye videos

    This paper deals with the computation of robust iris templates from video sequences. The main contributions are (i) optimal tracking and robust detection of the pupil, (ii) smart selection of the iris images to be enrolled, and (iii) a multi-threaded, quality-driven decomposition of tasks to reach real-time processing. The system was evaluated on the Multiple Biometric Grand Challenge dataset. In particular, we conducted a systematic study of the fragile bit rate and the number of merged images, using classical criteria. We reached an equal error rate of 0.2%, which reflects high performance on this database with respect to previous studies.
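The matching stage behind such equal-error-rate figures can be sketched briefly. The function below is a generic textbook formulation of masked iris-code comparison, not the paper's implementation; fragile bits (bits that flip between acquisitions of the same eye) can simply be excluded through the masks.

```python
import numpy as np

# Fractional Hamming distance between two binary iris codes, counting only
# bits marked valid in both masks (generic formulation).
def masked_hamming(code_a, mask_a, code_b, mask_b):
    valid = mask_a & mask_b
    n = valid.sum()
    if n == 0:
        return 1.0
    return ((code_a ^ code_b) & valid).sum() / n

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 2048, dtype=np.uint8)
noisy = code.copy()
flip = rng.random(2048) < 0.05                 # 5% of bits flip between captures
noisy[flip] ^= 1
mask = np.ones(2048, dtype=np.uint8)           # here: all bits considered stable

d_same = masked_hamming(code, mask, noisy, mask)
d_diff = masked_hamming(code, mask,
                        rng.integers(0, 2, 2048, dtype=np.uint8), mask)
print(d_same < 0.2 < d_diff)                   # genuine pairs score far below impostors
```

Merging several video frames into one template, as the paper does, effectively lowers the fraction of flipping bits and so widens the gap between the two distances.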

    Health monitoring of civil infrastructures by subspace system identification method: an overview

    Structural health monitoring (SHM) is a main contributor to the smart city of the future, addressing the need for safety, lower maintenance costs, and reliable condition assessment of structures. Among the algorithms used in SHM to identify the system parameters of structures, subspace system identification (SSI) is a reliable time-domain method that takes advantage of extended observability matrices. A considerable number of studies have concentrated on practical applications of SSI in recent years. To the best of the authors' knowledge, no study has been undertaken to review and investigate the application of SSI to the monitoring of civil engineering structures. This paper reviews studies that have used the SSI algorithm for damage identification and modal analysis of structures, with a fundamental focus on data-driven and covariance-driven SSI algorithms. In this review, we consider the subspace algorithm as a solution to real-world SHM problems. With regard to performance, a comparison between SSI and other methods is provided in order to investigate its advantages and disadvantages. The methods applied to SHM of civil engineering structures are categorized into three classes, from simple one-dimensional (1D) to very complex structures, and the detectability of SSI for different damage scenarios is reported. Finally, the available software packages incorporating SSI as their system identification technique are surveyed.
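The core algebra of covariance-driven SSI can be sketched compactly. To keep the example deterministic, the output covariances below are computed exactly from a known two-state system; in practice they are estimated from ambient-response measurements, and the model order is chosen from the singular-value spectrum rather than known in advance.

```python
import numpy as np

# Covariance-driven SSI on exact covariances R_i = C A^i G of an assumed
# two-state system: build a block Hankel matrix, factor it by SVD into an
# extended observability matrix, then recover A from shift invariance.
rho, theta = 0.95, 0.5                       # pole radius and angle (assumed)
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.7], [0.3]])                 # next-state/output covariance (assumed)

p = 10                                       # number of block rows
R = [C @ np.linalg.matrix_power(A, i) @ G for i in range(1, 2 * p + 1)]
H = np.block([[R[i + j] for j in range(p)] for i in range(p)])   # Hankel matrix

U, s, Vt = np.linalg.svd(H)
n = 2                                        # model order (here known)
O = U[:, :n] * np.sqrt(s[:n])                # extended observability matrix
A_hat = np.linalg.pinv(O[:-1]) @ O[1:]       # shift-invariance estimate of A

eig = np.linalg.eigvals(A_hat)
print(np.sort(np.abs(eig)), np.sort(np.abs(np.angle(eig))))
# pole radius ~0.95 and angle ~0.5 rad are recovered up to similarity
```

The eigenvalues of `A_hat` carry the modal frequencies and damping ratios, which is what makes the method attractive for output-only monitoring.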

    Apprentissage non-supervisé dans les modèles linéaires gaussiens. Application à la biométrie dynamique de l’iris

    The Kalman filter family of models allows the states of a dynamical system to be estimated from a series of incomplete or noisy measurements. Despite their relative modelling simplicity, these filters are used across a wide scientific spectrum, including radar, vision, and communications. This success rests mainly on the existence of exact and fast smoothing and filtering algorithms, i.e. linear in the number of observations, that minimize the mean square error. In this thesis, we are concerned with the pairwise Kalman filter. Relative to the original model, this filter adds new interactions between hidden states and observations while keeping exact and fast algorithms in the linear Gaussian case. We study in particular the unsupervised and robust estimation of the parameters of a pairwise Kalman filter from a limited set of observations. The manuscript describes several learning algorithms based on maximum-likelihood estimation following the EM (Expectation-Maximization) principle. These original algorithms allow a priori constraints on the parameters of the studied system to be embedded, expressing partial knowledge of the physics of the application or of the sensor. Such constrained systems reduce the ambiguity linked to the identifiability problem of the pairwise Kalman filter during parameter estimation. They also limit the number of local maxima of the likelihood function by reducing the dimension of the search space, and thus sometimes prevent the EM algorithm from being trapped. It is important to note that all the algorithms proposed in this manuscript apply directly to the original Kalman filter, as a particular case of the pairwise Kalman filter. All the algorithms are made robust by systematically propagating square roots of the covariance matrices instead of the covariance matrices themselves, which avoids the well-known numerical difficulties linked to the loss of symmetry and positivity of these matrices. These robust algorithms are finally evaluated and compared in an iris biometry application based on video sequences: pupil tracking is used to enroll and identify a person in real time through their iris code.
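As a reference point for the models discussed above, one predict/update step of the classical Kalman filter, the particular case of the pairwise model in which the extra state/observation cross-dependencies vanish, can be sketched as follows. The matrices are illustrative (a constant-velocity tracking model), not taken from the thesis.

```python
import numpy as np

# One predict/update step of the classical Kalman filter (generic sketch).
def kalman_step(x, P, y, F, H, Q, R):
    x_pred = F @ x                        # predicted state mean
    P_pred = F @ P @ F.T + Q              # predicted state covariance
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                # only position is observed
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

x, P = np.zeros(2), np.eye(2)
for y in [0.1, 1.2, 1.9, 3.1, 4.0]:       # noisy positions of a unit-speed target
    x, P = kalman_step(x, P, np.array([y]), F, H, Q, R)
print(x)                                  # position estimate near 4, velocity near 1
```

The square-root variants advocated in the thesis replace the explicit `P_new` recursion above, which is precisely where symmetry and positivity can be lost in finite precision.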

    Linear dimensionality reduction: Survey, insights, and generalizations

    Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
JPC and ZG received funding from the UK Engineering and Physical Sciences Research Council (EPSRC EP/H019472/1). JPC received funding from a Sloan Research Fellowship, the Simons Foundation (SCGB#325171 and SCGB#325233), the Grossman Center at Columbia University, and the Gatsby Charitable Trust. This is the author accepted manuscript. The final version is available from MIT Press via http://jmlr.org/papers/v16/cunningham15a.htm
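The survey's "data plus objective in, optimal projection out" framing can be illustrated with the PCA objective, whose optimum over matrices with orthonormal columns is known in closed form. This is a generic sketch on synthetic data, not the paper's manifold-optimization solver.

```python
import numpy as np

# PCA viewed as an optimization program: maximise the projected variance
# trace(M^T Cov M) over matrices M with orthonormal columns.  The optimum
# is spanned by the top eigenvectors of the covariance.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
X = X - X.mean(axis=0)                       # centre the data

cov = X.T @ X / len(X)
eigval, eigvec = np.linalg.eigh(cov)         # eigenvalues in ascending order
M = eigvec[:, ::-1][:, :2]                   # top-2 principal directions

Y = X @ M                                    # optimal 2-D linear projection
print(Y.shape, np.allclose(M.T @ M, np.eye(2)))   # (500, 2) True
```

Swapping the trace objective for another (e.g. a canonical-correlations criterion) while keeping the same orthonormality manifold is exactly the kind of generalization the survey's generic solver is built for.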

    Development of a Novel Dataset and Tools for Non-Invasive Fetal Electrocardiography Research

    This PhD thesis presents the development of a novel open multi-modal dataset for advanced studies of fetal cardiological assessment, along with a set of signal processing tools for its exploitation. The Non-Invasive Fetal Electrocardiography (ECG) Analysis (NInFEA) dataset features multi-channel electrophysiological recordings with high sampling frequency and digital resolution, a maternal respiration signal, synchronized fetal trans-abdominal pulsed-wave Doppler (PWD) recordings, and clinical annotations provided by expert clinicians at the time of signal collection. To the best of our knowledge, no similar dataset is available. The signal processing tools target both the PWD and the non-invasive fetal ECG, exploiting the recorded dataset. Regarding the former, the study focuses on preparing the signal for the automatic measurement of relevant morphological features already adopted in clinical practice for cardiac assessment. To this aim, a relevant step is the automatic identification of complete and measurable cardiac cycles in the PWD videos: a rigorous methodology was deployed to analyse the different processing steps involved in the automatic delineation of the PWD envelope, and different approaches were then implemented for the supervised classification of the cardiac cycles, discriminating between complete and measurable cycles and malformed or incomplete ones. Finally, preliminary measurement algorithms were developed to extract clinically relevant parameters from the PWD. Regarding the fetal ECG, the thesis concentrates on a systematic analysis of the performance of adaptive filters for non-invasive fetal ECG extraction, identified as the reference tool throughout the thesis. Two further studies are reported: one on wavelet-based denoising of the extracted fetal ECG and another on fetal ECG quality assessment from the analysis of the raw abdominal recordings.
Overall, the thesis represents an important milestone in the field, promoting the open-data approach and introducing automated analysis tools that could easily be integrated into future medical devices.
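The adaptive-filtering baseline mentioned above can be sketched on synthetic data. This is an illustration only (not the NInFEA recordings or the thesis code): the abdominal signal is modelled as an unknown filtering of a maternal reference plus a weak component uncorrelated with that reference, standing in for the fetal contribution, and an LMS noise canceller learns the maternal path.

```python
import numpy as np

# LMS adaptive noise canceller, the classical building block in
# non-invasive fetal ECG extraction (synthetic stand-in signals).
rng = np.random.default_rng(0)
n, taps, mu = 20000, 8, 0.002
maternal = np.sin(2 * np.pi * 1.2 * np.arange(n) / 500)  # 1.2 Hz reference at 500 Hz
fetal = 0.1 * rng.normal(size=n)                         # weak, uncorrelated stand-in
path = np.array([0.5, 0.3, 0.1])                         # unknown maternal path
abdominal = np.convolve(maternal, path)[:n] + fetal

w = np.zeros(taps)
residual = np.zeros(n)
for k in range(taps, n):
    x = maternal[k - taps + 1:k + 1][::-1]   # most recent reference samples
    e = abdominal[k] - w @ x                 # subtract predicted maternal part
    w += mu * e * x                          # LMS weight update
    residual[k] = e

# after convergence the residual is dominated by the non-maternal component
err = residual[n // 2:] - fetal[n // 2:]
print(np.sqrt(np.mean(err ** 2)))            # small vs the 0.1 component amplitude
```

Real fetal ECG extraction adds several complications (multiple abdominal leads, nonstationary maternal paths, overlapping QRS complexes), which is why the thesis analyses adaptive-filter performance systematically rather than assuming this idealised behaviour.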

    Operational modal analysis - Theory and aspects of application in civil engineering

    In recent years, the demand for dynamic analyses of existing structures in civil engineering has increased remarkably. These analyses are mainly based on numerical models, and the generated results accordingly depend on the quality of the models used. It is therefore very important that the models describe the considered systems such that the behaviour of the physical structure is realistically represented. As any model is based on assumptions, a certain degree of uncertainty is always present in the results of a simulation based on the respective numerical model. To minimise these uncertainties in the prediction of the response of a structure to a certain loading, it has become common practice to update or calibrate the parameters of a numerical model based on observations of the structural behaviour of the existing system. Determining the behaviour of an existing structure requires experimental investigations. If the numerical analyses concern the dynamic response of a structure, it is sensible to direct the experimental investigations towards the identification of the dynamic structural behaviour, which is determined by the modal parameters of the system. Consequently, several methods for the experimental identification of modal parameters have been developed since the 1980s. Due to various technical constraints in civil engineering that limit the possibilities of exciting a structure with economically reasonable effort, several methods have been developed that allow modal identification from tests with ambient excitation. The approach of identifying modal parameters only from measurements of the structural response, without precise knowledge of the excitation, is known as output-only or operational modal analysis.
Since operational modal analysis (OMA) can be considered a link between numerical modelling and simulation on the one hand and the dynamic behaviour of an existing structure on the other, the respective algorithms connect the concepts of structural dynamics with the mathematical tools applied in the processing of experimental data. Accordingly, the related theoretical topics are revised after an introduction to the subject. Several OMA methods have been developed over the last decades. The most established algorithms are presented here, and their application is illustrated by means of both a small numerical example and an experimental one. Since experimentally obtained results are always subject to manifold influences, appropriate postprocessing of the results is necessary for quality assessment. This quality assessment does not only require suitable indicators but should also include the quantification of uncertainties. One special feature of modal testing is that it is common to instrument the structure in different sensor setups to improve the spatial resolution of the identified mode shapes. The modal information identified from tests in several setups then needs to be merged a posteriori; algorithms to cope with this problem are also presented. Because the amount of data generated in modal tests can become very large, manual processing can become extremely expensive or even impossible, for example in the case of long-term continuous structural monitoring. In these situations, automated analysis and postprocessing are essential, and descriptions of the respective methodologies are therefore also included in this work. Every structural system in civil engineering is unique, and so every identification of modal parameters has its specific challenges.
Some aspects that can arise in practical applications of operational modal analysis are presented and discussed in a chapter dedicated to specific problems that an analyst may have to overcome. Case studies of systems with very closely spaced modes and with limited accessibility, as well as the application of different OMA methods, are described and discussed. In this context, the focus is put on several types of uncertainty that may occur at the multiple stages of an operational modal analysis. In the literature, only very specific uncertainties at certain stages of the analysis have been addressed; here, the topic of uncertainties is considered in a broader sense, and approaches for treating the respective problems are suggested. Eventually, it is concluded that the methodologies of operational modal analysis and the related technical solutions are already well engineered. However, as in any discipline that involves experiments, a certain degree of uncertainty always remains in the results. From these conclusions, a demand for further research and development is derived, directed towards the minimisation of these uncertainties and a corresponding optimisation of the steps and parameters involved in an operational modal analysis.
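One of the postprocessing indicators alluded to above can be made concrete: the Modal Assurance Criterion (MAC), commonly used to pair mode shapes identified from different sensor setups or different OMA algorithms. A minimal sketch, using the analytical bending shapes of a simply supported beam as assumed test vectors:

```python
import numpy as np

# Modal Assurance Criterion between two (possibly complex) mode shape
# vectors: 1 means collinear (same physical mode), near 0 means unrelated.
def mac(phi_a, phi_b):
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

# first two bending shapes of a simply supported beam, sampled at 10 points,
# with a little measurement noise on the second estimate of mode 1
x = np.linspace(0, 1, 10)
mode1 = np.sin(np.pi * x)
mode2 = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
mode1_noisy = mode1 + 0.02 * rng.normal(size=10)

print(round(mac(mode1, mode1_noisy), 3))   # close to 1: same physical mode
print(round(mac(mode1, mode2), 3))         # close to 0: distinct modes
```

Because the MAC is invariant to the arbitrary scaling of identified mode shapes, it is well suited to merging results from setups with different reference sensors.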