
    A Framework for Analyzing Online Cross-correlators using Price's Theorem and Piecewise-Linear Decomposition

    Precise estimation of the cross-correlation or similarity between two random variables lies at the heart of signal detection, hyperdimensional computing, associative memories, and neural networks. Although a vast literature exists on methods for estimating cross-correlations, the question of which method is the best and simplest for estimating cross-correlations from finite samples remains open. In this paper, we first argue that the standard empirical approach might not be optimal, even though the estimator exhibits uniform convergence to the true cross-correlation. Instead, we show that there exists a large class of simple non-linear functions that can be used to construct cross-correlators with a higher signal-to-noise ratio (SNR). To demonstrate this, we first present a general mathematical framework, based on Price's Theorem, that allows us to analyze cross-correlators constructed using a mixture of piecewise-linear functions. Using this framework and high-dimensional embedding, we show that some of the most promising cross-correlators are based on Huber's loss functions, margin-propagation (MP) functions, and the log-sum-exp function.
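    To make the SNR comparison concrete, here is a minimal sketch (not code from the paper) contrasting the standard empirical correlator with a clipped, Huber-style correlator on heavy-tailed data; the clipping threshold, sample sizes, and noise model are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def clip(v, delta=1.5):
        # Huber-style saturating nonlinearity: identity near zero, clipped beyond delta
        return np.clip(v, -delta, delta)

    def estimator_snr(estimator, trials=2000, n=64, rho=0.5):
        # Empirical SNR of a correlation estimator: mean over trials / std over trials
        z = rng.standard_normal((trials, n))
        x = z + rng.standard_t(df=3, size=(trials, n))       # heavy-tailed observation noise
        y = rho * z + rng.standard_t(df=3, size=(trials, n))
        est = estimator(x, y)
        return est.mean() / est.std()

    empirical = lambda x, y: np.mean(x * y, axis=1)          # standard sample correlator
    clipped   = lambda x, y: np.mean(clip(x) * clip(y), axis=1)

    print("SNR, empirical correlator:", estimator_snr(empirical))
    print("SNR, clipped correlator:  ", estimator_snr(clipped))
    ```

    Under heavy-tailed noise the saturating correlator typically shows the higher SNR, which is the effect the paper analyzes rigorously via Price's Theorem.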

    Conditional Transformation Models

    The ultimate goal of regression analysis is to obtain information about the conditional distribution of a response given a set of explanatory variables. This goal is, however, seldom achieved because most established regression models estimate only the conditional mean as a function of the explanatory variables and assume that higher moments are not affected by the regressors. The underlying reason for this restriction is the assumption of additivity of signal and noise. We propose to relax this common assumption in the framework of transformation models. The novel class of semiparametric regression models proposed herein allows transformation functions to depend on explanatory variables. These transformation functions are estimated by regularised optimisation of scoring rules for probabilistic forecasts, e.g. the continuous ranked probability score. The corresponding estimated conditional distribution functions are consistent. Conditional transformation models are potentially useful for describing possible heteroscedasticity, comparing spatially varying distributions, identifying extreme events, deriving prediction intervals and selecting variables beyond mean regression effects. An empirical investigation based on a heteroscedastic varying-coefficient simulation model demonstrates that semiparametric estimation of conditional distribution functions can be more beneficial than kernel-based non-parametric approaches or parametric generalised additive models for location, scale and shape.
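    Since the transformation functions are fit by minimising proper scoring rules, a small worked example of the continuous ranked probability score may help; this sketch (my own illustration, not the authors' code) evaluates the CRPS of an ensemble forecast via its kernel representation CRPS(F, y) = E|X - y| - (1/2) E|X - X'|.

    ```python
    import numpy as np

    def crps_ensemble(samples, y):
        """CRPS of an empirical predictive distribution given by `samples`,
        evaluated at the realized outcome y, via the kernel representation
        CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|."""
        samples = np.asarray(samples, dtype=float)
        term1 = np.abs(samples - y).mean()
        term2 = 0.5 * np.abs(samples[:, None] - samples[None, :]).mean()
        return term1 - term2

    # Example: a forecast ensemble centred near the truth scores lower (better)
    rng = np.random.default_rng(1)
    print(crps_ensemble(rng.normal(0.0, 1.0, 500), y=0.1))  # well-calibrated forecast
    print(crps_ensemble(rng.normal(3.0, 1.0, 500), y=0.1))  # biased forecast
    ```

    A biased or overdispersed predictive distribution scores higher (worse), which is exactly the property the regularised optimisation exploits.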

    Probabilistic Methods for Model Validation

    This dissertation develops a probabilistic method for validation and verification (V&V) of uncertain nonlinear systems. The existing systems-control literature on model and controller V&V either deals with linear systems with norm-bounded uncertainties, or considers nonlinear systems in set-based and moment-based frameworks. These existing methods address model invalidation or falsification, rather than assessing the quality of a model with respect to measured data. In this dissertation, an axiomatic framework for model validation is proposed in a probabilistically relaxed sense that, instead of simply invalidating a model, seeks to quantify its "degree of validation". To develop this framework, novel algorithms for uncertainty propagation are proposed for both deterministic and stochastic nonlinear systems in continuous time. For the deterministic flow, we compute the time-varying joint probability density functions over the state space by solving the Liouville equation via the method of characteristics. For the stochastic flow, we propose an approximation algorithm that combines the method-of-characteristics solution of the Liouville equation with the Karhunen-Loève expansion of the process noise, thus enabling an indirect solution of the Fokker-Planck equation governing the evolution of the joint probability density functions. The efficacy of these algorithms is demonstrated for risk assessment in Mars entry-descent-landing, and for nonlinear estimation. Next, the V&V problem is formulated in terms of Monge-Kantorovich optimal transport, which naturally gives rise to a metric on the space of probability densities, the Wasserstein metric. It is shown that the resulting computation reduces to solving a linear program at each time of measurement availability, and computational complexity results are derived. Probabilistic guarantees, in both the average and worst-case sense, are given for the validation oracle resulting from the proposed method. The framework is demonstrated for nonlinear robustness verification of F-16 flight controllers subject to probabilistic uncertainties. Frequency-domain interpretations of the proposed framework are derived for linear systems, and its connections with existing nonlinear model validation methods are pointed out. In particular, we show that the asymptotic Wasserstein gap between two single-output linear time-invariant systems excited by Gaussian white noise is the difference between their average gains, up to a scaling by the strength of the input noise. A geometric interpretation of this result allows us to propose an intrinsic normalization of the Wasserstein gap, which in turn allows us to compare it with classical systems-theoretic metrics like the ν-gap. Next, it is shown that the optimal transport map can be used to automatically refine the model. This model refinement formulation leads to a non-smooth convex optimization problem, and examples are given to demonstrate how proximal operator splitting enables its numerical solution. The method is applied to finite-time feedback control of probability density functions, and to data-driven modeling of dynamical systems.
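    For scalar outputs the Wasserstein gap between model-predicted and measured samples has a closed form, so a minimal sketch of the validation step is possible without the dissertation's full linear-programming machinery; scipy's wasserstein_distance (Wasserstein-1 here; the thesis's exact order and multivariate LP formulation are not reproduced) serves as an illustrative stand-in.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(2)

    # Model-predicted output samples vs. "measured" samples at one measurement time
    model_samples = rng.normal(0.0, 1.0, 1000)
    data_samples  = rng.normal(0.3, 1.2, 1000)

    # 1-D optimal-transport (Wasserstein-1) gap between the two empirical densities;
    # a small gap supports the model, a large gap counts against it.
    gap = wasserstein_distance(model_samples, data_samples)
    print(f"Wasserstein validation gap: {gap:.3f}")
    ```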

    Graph-based Estimation of Information Divergence Functions

    Information divergence functions, such as the Kullback-Leibler divergence or the Hellinger distance, play a critical role in statistical signal processing and information theory; however, estimating them can be challenging. Most often, parametric assumptions are made about the two distributions to estimate the divergence of interest. In cases where no parametric model fits the data, non-parametric density estimation is used. In statistical signal processing applications, Gaussianity is usually assumed, since closed-form expressions for common divergence measures have been derived for this family of distributions. Parametric assumptions are preferred when it is known that the data follow the model; however, this is rarely the case in real-world scenarios. Non-parametric density estimators are characterized by a very large number of parameters that have to be tuned with costly cross-validation. In this dissertation we focus on a specific family of non-parametric estimators, called direct estimators, that bypass density estimation completely and estimate the quantity of interest directly from the data. We introduce a new divergence measure, the D_p-divergence, that can be estimated directly from samples without parametric assumptions on the distributions. We show that the D_p-divergence bounds the binary, cross-domain, and multi-class Bayes error rates and, in certain cases, provides provably tighter bounds than the Hellinger divergence. In addition, we propose a new methodology that allows the experimenter to construct direct estimators for existing divergence measures, or to construct new divergence measures with custom properties tailored to the application. To examine the practical efficacy of these new methods, we evaluate them in a statistical learning framework on a series of real-world data science problems involving speech-based monitoring of neuro-motor disorders.
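    A minimal sketch of a direct, graph-based estimator in this spirit: pool the two samples, build a Euclidean minimum spanning tree, and count the edges joining points from different samples, in the style of the Friedman-Rafsky statistic. The normalisation below follows the published D_p form as I understand it and should be treated as an assumption rather than the dissertation's exact estimator.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import cdist

    def dp_divergence(X, Y):
        """Direct MST-based estimate: pool the samples, build a Euclidean MST,
        and count edges that join points from different samples
        (fewer cross-edges => better-separated distributions)."""
        m, n = len(X), len(Y)
        Z = np.vstack([X, Y])
        labels = np.r_[np.zeros(m), np.ones(n)]
        mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
        cross = np.sum(labels[mst.row] != labels[mst.col])   # cross-sample edges
        return 1.0 - (m + n) / (2.0 * m * n) * cross

    rng = np.random.default_rng(3)
    same  = dp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
    apart = dp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
    print(f"same distribution: {same:.2f}, separated: {apart:.2f}")  # ~0 vs ~1
    ```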

    Zero-delay source-channel coding

    In this thesis, we investigate the zero-delay transmission of source samples over three different types of communication channel models. First, we consider the zero-delay transmission of a Gaussian source sample over an additive white Gaussian noise (AWGN) channel in the presence of an additive white Gaussian (AWG) interference that is fully known by the transmitter. We propose three parameterized linear and non-linear transmission schemes for this scenario, and compare the corresponding mean square error (MSE) performances with that of a numerically optimized encoder obtained using the necessary optimality conditions. Next, we consider the zero-delay transmission of a Gaussian source sample over an AWGN channel with a one-bit analog-to-digital converter (ADC) front end. We study this problem under two different performance criteria, namely the MSE distortion and the distortion outage probability (DOP), and obtain the optimal encoder and decoder for both criteria. As generalizations of this scenario, we consider the performance with a K-level ADC front end as well as with multiple one-bit ADC front ends. We derive necessary conditions for the optimal encoder and decoder, which are then used to obtain numerically optimized encoder and decoder mappings. Finally, we consider the transmission of a Gaussian source sample over an AWGN channel with a one-bit ADC front end in the presence of correlated side information at the receiver. Again, we derive the necessary optimality conditions, and use them to obtain numerically optimized encoder and decoder mappings. We also consider the scenario in which the side information is available at the encoder as well, and obtain the optimal encoder and decoder mappings. The performance of the latter scenario serves as a lower bound on the performance of the case in which the side information is available only at the decoder.
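    As a baseline for these schemes, the simplest zero-delay strategy for a Gaussian source over AWGN (no interference, no ADC front end) is linear scaling to the power budget followed by linear MMSE estimation, achieving distortion sigma^2 / (1 + SNR); the sketch below (my own illustration, with a unit-variance source and arbitrary power and noise values) verifies this classical result by simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sigma2, P, N0 = 1.0, 2.0, 1.0                 # source variance, power budget, noise variance
    snr = P / N0

    # Encoder: scale the source sample to meet the power constraint E[x^2] <= P
    s = rng.normal(0.0, np.sqrt(sigma2), 100_000)
    x = np.sqrt(P / sigma2) * s
    y = x + rng.normal(0.0, np.sqrt(N0), s.size)  # AWGN channel

    # Decoder: linear MMSE estimate of s from y
    shat = (np.sqrt(P * sigma2) / (P + N0)) * y
    mse = np.mean((s - shat) ** 2)
    print(f"simulated MSE {mse:.4f} vs theory {sigma2 / (1 + snr):.4f}")
    ```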

    Covariance mapping techniques

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events occur within each pulse. To understand the physics of these processes, the photoelectrons and photoions need to be correlated, and covariance mapping is well suited to the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with X-ray free-electron lasers, because it is capable of suppressing pulse-fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma-ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
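    At its core, a covariance map is the shot-by-shot covariance cov(S(x), S(y)) between two points of a spectrum, accumulated over many pulses; the following sketch (simulated data, with hypothetical peak positions and fluctuation statistics) builds a simple map in which two ion channels fluctuate together and shows the cross-peak emerging off the diagonal.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_shots, n_bins = 5000, 64

    # Simulated shot-resolved spectra: two correlated peaks (a fragment pair)
    # riding on shot-to-shot intensity fluctuations plus counting noise.
    intensity = rng.gamma(shape=4.0, scale=1.0, size=n_shots)
    spectra = rng.poisson(0.1, size=(n_shots, n_bins)).astype(float)
    spectra[:, 20] += rng.poisson(intensity)          # ion A
    spectra[:, 45] += rng.poisson(intensity)          # its partner, ion B

    # Simple covariance map: C(x, y) = <S(x) S(y)> - <S(x)> <S(y)>
    mean = spectra.mean(axis=0)
    cov_map = spectra.T @ spectra / n_shots - np.outer(mean, mean)

    off = cov_map.copy()
    np.fill_diagonal(off, 0.0)                        # ignore the variance diagonal
    i, j = np.unravel_index(off.argmax(), off.shape)
    print(f"strongest correlation between bins {i} and {j}: {off[i, j]:.2f}")
    ```

    Partial covariance would additionally regress out a measured fluctuation parameter, such as pulse energy, before forming the map, which is what suppresses the fluctuation-induced background at free-electron lasers.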

    Fusion of Imaging and Inertial Sensors for Navigation

    The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After accomplishing a proper calibration, the image-aided inertial navigation algorithm is tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
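    The fusion step is a standard extended Kalman filter measurement update applied to the feature observations; the sketch below shows the generic update equations with a toy range-like measurement, not the dissertation's actual state or measurement models.

    ```python
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """One EKF measurement update for state x with covariance P,
        given measurement z, model h(x), Jacobian H(x), and noise covariance R."""
        Hx = H(x)
        S = Hx @ P @ Hx.T + R                 # innovation covariance
        K = P @ Hx.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x + K @ (z - h(x))            # correct the inertial prediction
        P_new = (np.eye(len(x)) - K @ Hx) @ P
        return x_new, P_new

    # Toy example: a 2-D position state corrected by a range-like feature measurement
    x0 = np.array([1.0, 2.0]); P0 = np.eye(2) * 0.5
    h  = lambda x: np.array([np.hypot(x[0], x[1])])
    H  = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
    x1, P1 = ekf_update(x0, P0, z=np.array([2.4]), h=h, H=H, R=np.array([[0.01]]))
    print(x1)
    ```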