
    A Full Scale Camera Calibration Technique with Automatic Model Selection – Extension and Validation

    This thesis presents work on the testing and development of a complete camera calibration approach applicable to a wide range of cameras equipped with normal, wide-angle, fish-eye, or telephoto lenses. The full-scale calibration approach estimates all of the intrinsic and extrinsic parameters. The calibration procedure is simple, requires no prior knowledge of any parameters, and uses a simple planar calibration pattern. Closed-form estimates of the intrinsic and extrinsic parameters are computed first, followed by nonlinear optimization. Polynomial functions are used to describe the lens projection instead of the commonly used radial model, and statistical information criteria are used to automatically determine the complexity of the lens distortion model. In the first stage, experiments were performed to verify and compare the performance of the calibration method on a wide range of lenses. Synthetic data was used to simulate real data and validate performance, and also to validate the distortion model selection, which uses the Akaike Information Criterion (AIC) to automatically select the complexity of the distortion model. In the second stage, an improved calibration procedure was developed to address shortcomings of the earlier method. Experiments on that method revealed that the estimate of the principal point was erroneous for lenses with a large focal length. To address this issue, the calibration method was modified to include additional steps that accurately estimate the principal point in the initial stages of the procedure. The modified procedure can now be used to calibrate a wide spectrum of imaging systems, including telephoto and varifocal lenses. A survey of current work revealed a large body of research concentrating on calibrating only the distortion of the camera: researchers propose methods that calibrate only the distortion parameters and suggest using other popular methods to find the remaining camera parameters. Following this methodology, we apply distortion calibration to our method to separate the estimation of the distortion parameters, and we compare the results with the original method on a wide range of imaging systems.
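To illustrate the model-selection step described above, the sketch below (not from the thesis; a generic one-dimensional example with synthetic data and hypothetical variable names) fits polynomial distortion curves of increasing order and picks the order minimizing the AIC:

```python
import numpy as np

def select_distortion_order(r, d, max_order=6):
    """Fit polynomials of increasing order to radial-distortion samples and
    pick the order minimizing AIC = n*ln(RSS/n) + 2*(k+1), where a degree-k
    polynomial has k+1 coefficients."""
    n = len(r)
    best_order, best_aic = None, np.inf
    for k in range(1, max_order + 1):
        coeffs = np.polyfit(r, d, k)                   # least-squares fit
        rss = np.sum((np.polyval(coeffs, r) - d) ** 2)
        aic = n * np.log(rss / n) + 2 * (k + 1)
        if aic < best_aic:
            best_order, best_aic = k, aic
    return best_order, best_aic

# Synthetic example: cubic distortion curve plus small measurement noise
rng = np.random.default_rng(0)
r = np.linspace(0.0, 1.0, 200)
d = 0.3 * r + 0.05 * r**3 + rng.normal(0.0, 1e-3, r.size)
order, aic = select_distortion_order(r, d)
```

With clean near-cubic data the criterion settles on a low-order model; the penalty term keeps higher orders from being selected for marginal residual gains.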

    Microprocessor Implementation of Autoregressive Analysis of Process Sensor Signals

    Automated signal analysis enables effective system surveillance and also helps characterize the dynamic behavior of a system, such as its impulse and step responses. Autoregressive analysis is a parametric technique widely used for system surveillance and diagnosis. The main objective of this research work is to develop an embedded system for online autoregressive analysis of sensor signals for monitoring system parameters. This thesis presents the algorithm, data representation, and performance of an optimized microprocessor implementation of autoregressive analysis. In this work an autoregressive (AR) model is obtained as the solution to a linear system known as the Yule-Walker equations. The model is then implemented on the Motorola PowerPC MPC555 processor. The embedded software for autoregressive analysis is written in the C programming language using fixed-point arithmetic. It includes estimation of the autoregressive parameters, recursive estimation of the noise variance from the AR parameters, determination of the optimal model order, and model validation.
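A minimal sketch of the Yule-Walker step, in floating-point NumPy rather than the fixed-point C of the actual implementation, might look like this (the AR(2) example data is illustrative only):

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(p) coefficients and innovation variance by solving the
    Yule-Walker equations R a = r with biased sample autocovariances."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates c[0..order]
    c = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system built from the autocovariances
    R = np.array([[c[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, c[1:])          # AR coefficients
    sigma2 = c[0] - np.dot(a, c[1:])       # noise (innovation) variance
    return a, sigma2

# Example: recover an AR(2) process x[t] = 0.6 x[t-1] - 0.3 x[t-2] + e[t]
rng = np.random.default_rng(1)
e = rng.normal(0.0, 1.0, 5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
a, sigma2 = yule_walker(x, 2)
```

An embedded fixed-point version would replace the floating-point solve with a Levinson-Durbin recursion, which avoids the explicit matrix inversion.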

    Multi-Step Knowledge-Aided Iterative ESPRIT for Direction Finding

    In this work, we propose a subspace-based algorithm for DOA estimation which iteratively reduces the disturbance factors of the estimated data covariance matrix and incorporates prior knowledge that is gradually obtained online. An analysis of the MSE of the reshaped data covariance matrix is carried out, along with comparisons between the computational complexities of the proposed and existing algorithms. Simulations focusing on closely-spaced sources, both uncorrelated and correlated, illustrate the improvements achieved.
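For context, the textbook (non-iterative, non-knowledge-aided) ESPRIT baseline for a uniform linear array can be sketched as follows; this is the standard algorithm the proposed method builds on, not the proposed method itself:

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Standard ESPRIT on ULA snapshots X (sensors x snapshots);
    d is the element spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    _, vecs = np.linalg.eigh(R)                     # ascending eigenvalues
    Es = vecs[:, -n_sources:]                       # signal subspace
    # Rotational invariance between the two overlapping subarrays
    Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

# Two uncorrelated sources at -10 and 20 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(2)
M, N = 8, 2000
angles = np.radians([-10.0, 20.0])
A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
est = np.sort(esprit_doa(X, 2))
```

The proposed multi-step algorithm differs in how the covariance estimate is refined iteratively and how prior knowledge enters; the subspace rotation step above is the common core.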

    Generative Models Based on the Bounded Asymmetric Student’s t-Distribution

    Gaussian mixture models (GMMs) are a widely used approach for clustering, but they have several limitations, such as low tolerance to outliers and an assumption of data normality. Another problem with finite mixture models in general is inferring the optimal number of mixture components. An effective approach to this problem is model selection: the process of choosing the number of mixture components that yields the best clustering performance. In this thesis, we tackle both of these issues: we propose minimum message length (MML) as a model selection criterion for the multivariate bounded asymmetric Student's t-mixture model (BASMM). The BASMM is chosen as an alternative that addresses the GMM's limitations, as it provides a better fit for the irregularities of real-world data. We formulate the MML criterion for the BASMM and test its performance through multiple experiments with different problem settings. Hidden Markov models (HMMs) are popular methods for continuous sequential data modeling and classification tasks. In such applications, the observation emission densities of the HMM hidden states are typically modeled by elliptically contoured distributions, namely Gaussians or Student's t-distributions. In this context, this thesis proposes BAMMHMM: a novel HMM with bounded asymmetric Student's t-mixture model (BASMM) emissions. This HMM is designed to fit skewed and outlier-heavy observations, which are typical in many fields such as finance and signal processing. We demonstrate the improved robustness of our model by presenting results on several real-world applications.
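To illustrate the model-selection idea, the sketch below runs EM on a 1-D Gaussian mixture and selects the component count with BIC, used here only as a simple stand-in for the MML criterion (and the plain Gaussian for the BASMM) developed in the thesis:

```python
import numpy as np

def em_gmm_1d(x, k, iters=200):
    """Fit a 1-D Gaussian mixture with EM and return its log-likelihood."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances (small variance floor)
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.sum(np.log(p.sum(axis=1)))

def select_k(x, max_k=4):
    """Choose the component count minimizing BIC = -2 ll + n_params ln(n)."""
    scores = [-2 * em_gmm_1d(x, k) + (3 * k - 1) * np.log(len(x))
              for k in range(1, max_k + 1)]
    return int(np.argmin(scores)) + 1

# Two well-separated modes: the criterion should recover k = 2
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
k = select_k(x)
```

MML plays the same structural role as BIC here (penalized likelihood over candidate component counts) but derives the penalty from the message length of encoding model plus data.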

    Information-Theoretic Causal Discovery

    It is well known that correlation does not equal causation, but how can we infer causal relations from data? Causal discovery tries to answer precisely this question by rigorously analyzing under which assumptions it is feasible to infer causal networks from passively collected, so-called observational data. In particular, causal discovery aims to infer a directed graph among a set of observed random variables under assumptions that are as realistic as possible. A key assumption in causal discovery is faithfulness: we assume that separations in the true graph imply independencies in the distribution, and vice versa. If faithfulness holds and we have access to a perfect independence oracle, traditional causal discovery approaches can infer the Markov equivalence class of the true causal graph, i.e., the correct undirected network and even some of the edge directions. In a real-world setting, however, faithfulness may be violated, and no such independence oracle exists. Beyond that, we are interested in inferring the complete DAG structure, not just the Markov equivalence class. To circumvent, or at least alleviate, these limitations, we take an information-theoretic approach. In the first part of this thesis, we consider violations of faithfulness that can be induced by exclusive-or relations or cancelling paths, and develop a weaker faithfulness assumption, called 2-adjacency faithfulness, that detects some of these mechanisms. Further, we analyze under which conditions it is possible to infer the correct DAG structure even when such violations occur. In the second part, we focus on independence testing via conditional mutual information (CMI). CMI is an information-theoretic measure of dependence based on Shannon entropy. We first suggest estimating CMI for discrete variables via normalized maximum likelihood instead of the plug-in maximum likelihood estimator, which tends to overestimate dependencies. On top of that, we show that CMI can be consistently estimated for discrete-continuous mixture random variables by simply discretizing the continuous parts of each variable. Last, we consider the problem of distinguishing the two Markov equivalent graphs X → Y and Y → X, a necessary step towards discovering all edge directions. To solve this problem, it is inevitable to make assumptions about the generating mechanism. We build on the postulate that the cause is algorithmically independent of its mechanism and propose two methods to approximate it via the minimum description length (MDL) principle: one for univariate numeric data and one for multivariate mixed-type data. Finally, we combine insights from our MDL-based approach with regression-based methods that have strong guarantees, and show that we can identify cause and effect via L0-regularized regression.
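The plug-in CMI estimator criticized in the abstract can be sketched directly for discrete variables (the NML correction itself is not reproduced here); note that even for conditionally independent data the estimate comes out slightly positive, which is exactly the overestimation bias at issue:

```python
import numpy as np
from collections import Counter

def plug_in_cmi(x, y, z):
    """Plug-in (maximum-likelihood) estimate of I(X;Y|Z) in bits from
    discrete samples, using empirical frequencies for all probabilities."""
    n = len(x)
    cxyz = Counter(zip(x, y, z))
    cxz = Counter(zip(x, z))
    cyz = Counter(zip(y, z))
    cz = Counter(z)
    cmi = 0.0
    for (a, b, c), k in cxyz.items():
        # p(x,y,z) * log2[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
        cmi += (k / n) * np.log2(k * cz[c] / (cxz[(a, c)] * cyz[(b, c)]))
    return cmi

# X and Y are conditionally independent given Z, so the true CMI is zero
rng = np.random.default_rng(4)
z = rng.integers(0, 2, 10000)
x = (z + rng.integers(0, 2, 10000)) % 2
y = (z + rng.integers(0, 2, 10000)) % 2
cmi = plug_in_cmi(x, y, z)
```

An NML-based estimator replaces the maximum-likelihood probabilities with normalized maximum likelihood, which shrinks exactly this kind of spurious dependence.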