
    Full State History Cooperative Localisation with Complete Information Sharing

    This thesis presents a decentralised localisation method for multiple robots. It reduces bandwidth requirements whilst computing local solutions that fuse information from other robots. The method does not prescribe a communication topology or require complex tracking of information, and the steps for including shared data match standard elements of nonlinear optimisation algorithms. The thesis makes four contributions. The first is a method to split the multiple-vehicle problem into sections that can be iteratively transmitted in packets with bounded bandwidth. This is done through delayed elimination of external states, which are the states involved in intervehicle observations. Observations are placed in subgraphs that accumulate between external states. Internal states, which are all states not involved in intervehicle observations, can then be eliminated from each subgraph, and the joint probability of the start and end states is shared between vehicles and combined to yield the solution to the entire graph. The second contribution is the use of variable reordering within these packets to handle delayed observations that target an existing state, such as visual loop closures. We identify the calculations required to give the conditional probability of the delayed historical state given the existing external states before and after it. This reduces the recalculation to updating the factorisation of a single subgraph and is independent of the time since the observation was made. The third contribution is a method, and the conditions under which it applies, for inserting states into existing packets without invalidating previously transmitted data. We derive the conditions that enable this method, and our fourth contribution is two motion models that conform to them. Together, these permit handling of the general out-of-sequence case.
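    In the Gaussian case, eliminating the internal states of a subgraph and keeping only the joint over its external start and end states amounts to a marginalisation in information (canonical) form. The snippet below is a minimal sketch of that step under those assumptions, not the thesis's implementation; the variable names and the convention of ordering the external states first are illustrative.

```python
import numpy as np

def marginalise_internal(Lambda, eta, n_ext):
    """Keep only the external (start/end) states of one subgraph.

    Assumes the subgraph is summarised in information form (Lambda, eta)
    with the n_ext external states ordered first -- an illustrative setup.
    """
    A = Lambda[:n_ext, :n_ext]   # external / external block
    B = Lambda[:n_ext, n_ext:]   # external / internal coupling
    C = Lambda[n_ext:, n_ext:]   # internal / internal block
    a, b = eta[:n_ext], eta[n_ext:]
    C_inv = np.linalg.inv(C)
    # Schur complement: the marginal over the external states only.
    Lambda_ext = A - B @ C_inv @ B.T
    eta_ext = a - B @ C_inv @ b
    return Lambda_ext, eta_ext
```

    In this sketch, the (Lambda_ext, eta_ext) pair is the bounded-size summary of a subgraph that a vehicle would share in a packet.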

    Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation

    Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to a lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty is presented, e.g. highest posterior density credible regions and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal, and their performance in the proposed UQ methodology is investigated.
    Comment: 5 pages, 5 figures
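    As a rough illustration of how a local credible interval can be obtained from this kind of machinery: given a MAP estimate, a convex objective f, and a threshold gamma_alpha defining an approximate highest posterior density region {x : f(x) <= gamma_alpha}, bisection along one pixel (or superpixel) finds how far that pixel can be pushed before the point leaves the region. This is only a schematic sketch under those assumptions; the function and parameter names are hypothetical and it is not the authors' code.

```python
import numpy as np

def upper_credible_bound(x_map, idx, objective, gamma_alpha, hi, tol=1e-3):
    """Largest value of pixel `idx` that keeps the point inside the HPD set (sketch)."""
    def inside(value):
        x = x_map.copy()
        x[idx] = value
        return objective(x) <= gamma_alpha

    a, b = float(x_map[idx]), float(hi)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if inside(mid):
            a = mid   # still inside the credible region, push further
        else:
            b = mid
    return a          # approximate upper end of the local credible interval
```

    A lower bound follows symmetrically; because the objective is convex, the admissible pixel values along this line form an interval, so bisection is valid.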

    Compressed sensing for wide-field radio interferometric imaging

    For the next generation of radio interferometric telescopes it is of paramount importance to incorporate wide field-of-view (WFOV) considerations in interferometric imaging; otherwise the fidelity of reconstructed images will suffer greatly. We extend compressed sensing techniques for interferometric imaging to a WFOV and recover images in the spherical coordinate space in which they naturally live, eliminating any distorting projection. The effectiveness of the spread spectrum phenomenon, highlighted recently by one of the authors, is enhanced when going to a WFOV, while sparsity is promoted by recovering images directly on the sphere. Both of these properties act to improve the quality of reconstructed interferometric images. We quantify the performance of compressed sensing reconstruction techniques through simulations, highlighting the superior reconstruction quality achieved by recovering interferometric images directly on the sphere rather than on the plane.
    Comment: 15 pages, 8 figures, replaced to match version accepted by MNRAS
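    For readers unfamiliar with the reconstruction step, the basic compressed-sensing recovery used in this context is an ℓ1-regularised least-squares problem solved by iterative thresholding. Below is a generic, real-valued, planar toy sketch (sparsity assumed directly in the image domain, dense measurement matrix Phi), not the authors' spherical wide-field solver.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(y, Phi, lam=0.01, n_iter=200):
    """Solve min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)              # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

    In the wide-field setting described above, Phi would be replaced by the interferometric measurement operator and the sparsifying transform applied to signals defined on the sphere.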

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
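    To make the last point concrete, the forward-backward proximal splitting scheme alternates a gradient step on the smooth data-fidelity term with the proximal operator of the non-smooth regularizer. The sketch below instantiates it for the nuclear norm, one of the low-complexity priors listed above; the names and the matrix-completion toy usage are illustrative assumptions, not the chapter's notation.

```python
import numpy as np

def prox_nuclear(X, t):
    """Proximal operator of t * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def forward_backward(grad_f, prox_g, X0, step, lam, n_iter=300):
    """Iterate X <- prox_{step*lam*g}(X - step * grad_f(X))."""
    X = X0.copy()
    for _ in range(n_iter):
        X = prox_g(X - step * grad_f(X), step * lam)
    return X

# Toy usage (low-rank matrix completion): observe entries of Y where mask == 1.
# The gradient of 0.5*||mask*(X - Y)||_F^2 is mask*(X - Y), with Lipschitz constant 1.
#   X_hat = forward_backward(lambda X: mask * (X - Y), prox_nuclear,
#                            np.zeros_like(Y), step=1.0, lam=0.1)
```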