
    Estimation of phase noise in oscillators with colored noise sources

    In this letter we study the design of algorithms for the estimation of phase noise (PN) with colored noise sources. A soft-input maximum a posteriori PN estimator and a modified soft-input extended Kalman smoother are proposed. The performance of the proposed algorithms is compared against that of estimators studied in the literature, in terms of the mean square error of PN estimation and the symbol error rate of the considered communication system. The comparisons show that considerable performance gains can be achieved by designing estimators that employ correct knowledge of the PN statistics.
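
    As a rough illustration of the smoothing approach described above, the sketch below runs an extended Kalman filter followed by a Rauch-Tung-Striebel backward pass on a toy carrier-phase model. It is not the letter's soft-input estimator: the phase is assumed to follow a simple AR(1) (colored) process, the symbols are treated as known pilots, and all parameter values (a, q, r) are illustrative.

    import numpy as np

    def ekf_rts_phase(y, s, a=0.99, q=1e-3, r=1e-2, theta0=0.0, p0=1.0):
        """EKF + RTS smoother for y_k = s_k*exp(j*theta_k) + v_k with AR(1) phase."""
        n = len(y)
        theta_f = np.zeros(n); p_f = np.zeros(n)   # filtered mean / variance
        theta_p = np.zeros(n); p_p = np.zeros(n)   # predicted mean / variance
        theta, p = theta0, p0
        for k in range(n):
            theta_p[k] = a * theta                 # predict through the AR(1) model
            p_p[k] = a * a * p + q
            z = s[k] * np.exp(1j * theta_p[k])     # predicted noiseless sample
            H = np.array([-z.imag, z.real])        # Jacobian of [Re(z), Im(z)] w.r.t. theta
            innov = np.array([y[k].real - z.real, y[k].imag - z.imag])
            S = p_p[k] * np.outer(H, H) + r * np.eye(2)
            K = p_p[k] * H @ np.linalg.inv(S)      # Kalman gain (1 x 2)
            theta = theta_p[k] + K @ innov
            p = (1.0 - K @ H) * p_p[k]
            theta_f[k], p_f[k] = theta, p
        theta_s = theta_f.copy(); p_s = p_f.copy() # RTS backward smoothing pass
        for k in range(n - 2, -1, -1):
            G = p_f[k] * a / p_p[k + 1]
            theta_s[k] = theta_f[k] + G * (theta_s[k + 1] - theta_p[k + 1])
            p_s[k] = p_f[k] + G * G * (p_s[k + 1] - p_p[k + 1])
        return theta_s

    # Toy usage: QPSK pilots, AR(1) phase noise, complex AWGN.
    rng = np.random.default_rng(0)
    n = 500
    s = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
    theta = np.zeros(n)
    for k in range(1, n):
        theta[k] = 0.99 * theta[k - 1] + np.sqrt(1e-3) * rng.standard_normal()
    y = s * np.exp(1j * theta) + np.sqrt(1e-2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    print("phase MSE:", np.mean((ekf_rts_phase(y, s) - theta) ** 2))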

    Warm-started wavefront reconstruction for adaptive optics

    Future extreme adaptive optics (ExAO) systems have been suggested with up to 10^5 sensors and actuators. We analyze the computational speed of iterative reconstruction algorithms for such large systems. We compare a total of 15 different scalable methods, including multigrid, preconditioned conjugate-gradient, and several new variants of these. Simulations on a 128×128 square sensor/actuator geometry using Taylor frozen-flow dynamics are carried out using both open-loop and closed-loop measurements, and the algorithms are compared on the basis of mean squared error and the number of floating-point multiplications required. We also investigate the use of warm starting, where the most recent estimate is used to initialize the iterative scheme. In open-loop estimation or pseudo-open-loop control, warm starting provides a significant computational speedup; almost every algorithm tested converges in one iteration. In a standard closed-loop implementation, using a single iteration per time step, most algorithms give the minimum error even when cold started, and every algorithm gives the minimum error if warm started. The best algorithm is therefore the one with the smallest computational cost per iteration, not necessarily the one with the best quasi-static performance.
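
    The warm-starting idea above is easy to reproduce on a toy problem. The sketch below (not the paper's reconstructors or frozen-flow simulation) solves a synthetic sparse, symmetric positive-definite system A x = b_t for a slowly drifting right-hand side with conjugate gradient, once from a zero initial guess and once from the previous estimate, and counts iterations; the matrix, drift rate, and sizes are all illustrative.

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import cg

    n = 128 * 128                                   # flattened actuator grid
    # Sparse SPD stand-in for a regularized reconstruction operator.
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) + 0.1 * identity(n)

    def cg_solve(b, x0):
        iters = 0
        def count(_):
            nonlocal iters
            iters += 1
        x, _ = cg(A, b, x0=x0, callback=count)      # default tolerance
        return x, iters

    rng = np.random.default_rng(1)
    phase = rng.standard_normal(n)
    x_prev = np.zeros(n)
    for t in range(5):
        phase += 0.01 * rng.standard_normal(n)      # slow drift between frames
        b = A @ phase                               # synthetic measurements
        _, cold = cg_solve(b, np.zeros(n))
        x_prev, warm = cg_solve(b, x_prev)
        print(f"frame {t}: cold start {cold} iterations, warm start {warm}")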

    On multi-view learning with additive models

    In many scientific settings, data can be naturally partitioned into variable groupings called views. Common examples include environmental (1st view) and genetic information (2nd view) in ecological applications, and chemical (1st view) and biological (2nd view) data in drug discovery. Multi-view data also occur in text analysis and proteomics applications, where one view consists of a graph with observations as the vertices and a weighted measure of pairwise similarity between observations as the edges. Further, in several of these applications the observations can be partitioned into two sets, one where the response is observed (labeled) and the other where the response is not (unlabeled). The problem of simultaneously addressing multi-view data and incorporating unlabeled observations in training is referred to as multi-view transductive learning. In this work we introduce and study a comprehensive generalized fixed point additive modeling framework for multi-view transductive learning, where any view is represented by a linear smoother. The problem of view selection is discussed using a generalized Akaike Information Criterion, which provides an approach for testing the contribution of each view. An efficient implementation is provided for fitting these models with both backfitting and local-scoring type algorithms adjusted to semi-supervised graph-based learning. The proposed technique is assessed on both synthetic and real data sets and is shown to be competitive with state-of-the-art co-training and graph-based techniques. Comment: Published at http://dx.doi.org/10.1214/08-AOAS202 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
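
    The backfitting idea referred to above can be sketched in a few lines. The toy code below is not the paper's generalized fixed-point or graph-based procedure: it fits an additive model with one generic linear smoother per view (plain ridge hat matrices here), refreshing each view's component against the partial residual left by the others; all data and parameters are synthetic.

    import numpy as np

    def ridge_smoother(X, lam=1.0):
        # Hat matrix of a ridge fit on one view's features (a linear smoother).
        return X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)

    def backfit(smoothers, y, n_iter=50):
        y = y - y.mean()                            # intercept handled separately
        fits = [np.zeros(len(y)) for _ in smoothers]
        for _ in range(n_iter):
            for v, S in enumerate(smoothers):
                partial = y - sum(f for u, f in enumerate(fits) if u != v)
                fits[v] = S @ partial               # refresh view v's component
        return fits

    # Toy usage with two views ("environmental" and "genetic", say).
    rng = np.random.default_rng(2)
    n = 200
    X1 = rng.standard_normal((n, 5))
    X2 = rng.standard_normal((n, 3))
    y = X1[:, 0] - 2.0 * X2[:, 1] + 0.1 * rng.standard_normal(n)
    f1, f2 = backfit([ridge_smoother(X1), ridge_smoother(X2)], y)
    print("training MSE:", np.mean((y - y.mean() - f1 - f2) ** 2))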

    Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models

    We propose a new class of filtering and smoothing methods for inference in high-dimensional, nonlinear, non-Gaussian, spatio-temporal state-space models. The main idea is to combine the ensemble Kalman filter and smoother, developed in the geophysics literature, with state-space algorithms from the statistics literature. Our algorithms address a variety of estimation scenarios, including on-line and off-line state and parameter estimation. We take a Bayesian perspective, for which the goal is to generate samples from the joint posterior distribution of states and parameters. The key benefit of our approach is the use of ensemble Kalman methods for dimension reduction, which allows inference for high-dimensional state vectors. We compare our methods to existing ones, including ensemble Kalman filters, particle filters, and particle MCMC. Using a real data example of cloud motion and data simulated under a number of nonlinear and non-Gaussian scenarios, we show that our approaches outperform these existing methods.
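
    For readers unfamiliar with the ensemble Kalman building block used here, the sketch below implements a generic stochastic EnKF analysis step with perturbed observations on a linear-Gaussian toy problem. It is not the authors' combined ensemble/state-space algorithm, and the observation operator, covariances, and dimensions are illustrative.

    import numpy as np

    def enkf_analysis(ensemble, y, H, R, rng):
        """Stochastic EnKF update; ensemble has shape (n_state, n_members)."""
        n_mem = ensemble.shape[1]
        X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
        HX = H @ ensemble
        HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
        P_HT = X @ HA.T / (n_mem - 1)                         # sample covariance P H^T
        S = HA @ HA.T / (n_mem - 1) + R                       # innovation covariance
        K = P_HT @ np.linalg.inv(S)                           # Kalman gain
        # Perturbed observations keep the analysis ensemble spread consistent.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_mem).T
        return ensemble + K @ (Y - HX)

    # Toy usage: 100-dimensional state, 20 observations, 40 ensemble members.
    rng = np.random.default_rng(3)
    n_state, n_obs, n_mem = 100, 20, 40
    x_true = rng.standard_normal(n_state)
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0       # observe every 5th component
    R = 0.1 * np.eye(n_obs)
    y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)
    x_bg = x_true + rng.standard_normal(n_state)              # biased background state
    prior = x_bg[:, None] + rng.standard_normal((n_state, n_mem))
    post = enkf_analysis(prior, y, H, R, rng)
    print("prior RMSE:", np.sqrt(np.mean((prior.mean(1) - x_true) ** 2)))
    print("posterior RMSE:", np.sqrt(np.mean((post.mean(1) - x_true) ** 2)))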