
    Solution of the inverse scattering problem by T-matrix completion. II. Simulations

    This is Part II of the paper series on data-compatible T-matrix completion (DCTMC), a method for solving nonlinear inverse problems. Part I of the series contains the theory; here we present simulations for inverse scattering of scalar waves. The underlying mathematical model is the scalar wave equation, and the object function that is reconstructed is the medium susceptibility. The simulations are relevant to ultrasound tomographic imaging and seismic tomography. It is shown that DCTMC is a viable method for solving strongly nonlinear inverse problems with large data sets. It provides not only the overall shape of the object but also the quantitative contrast, which can correspond, for instance, to the variable speed of sound in the imaged medium.
    Comment: This is Part II of a paper series. Part I contains theory and is available at arXiv:1401.3319 [math-ph]. Accepted in this form to Phys. Rev.
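
    As a concrete illustration of the T-matrix formalism that DCTMC builds on (not of the completion algorithm itself): for a discretized medium, the T-matrix satisfies T = V + V G T, which can be solved in closed form as T = V (I - G V)^(-1), where V is the diagonal interaction built from the susceptibility and G is the Green's function of the scalar wave equation. A minimal numpy sketch follows; all array names, sizes, and values are illustrative.

```python
import numpy as np

# Sketch of the discretized T-matrix relation T = V + V G T, solved as
# T = V (I - G V)^(-1). V is diagonal in position space, built from the
# susceptibility chi on N voxels; G is a precomputed Green's function
# matrix. Placeholder data only; this is not the DCTMC algorithm itself.

def t_matrix(chi, G):
    """Return the T-matrix for susceptibility chi and Green's function G."""
    V = np.diag(chi)                 # interaction, diagonal in position space
    N = V.shape[0]
    return V @ np.linalg.inv(np.eye(N) - G @ V)

# Example with random placeholder data:
N = 50
rng = np.random.default_rng(0)
G = 0.01 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
chi = rng.uniform(0.0, 0.5, N)
T = t_matrix(chi, G)
print(T.shape)  # (50, 50)
```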

    Barrier Frank-Wolfe for Marginal Inference

    We introduce a globally convergent algorithm for optimizing the tree-reweighted (TRW) variational objective over the marginal polytope. The algorithm is based on the conditional gradient method (Frank-Wolfe) and moves pseudomarginals within the marginal polytope through repeated maximum a posteriori (MAP) calls. This modular structure enables us to leverage black-box MAP solvers (both exact and approximate) for variational inference and to obtain more accurate results than tree-reweighted algorithms that optimize over the local consistency relaxation. Theoretically, we bound the sub-optimality of the proposed algorithm despite the TRW objective having unbounded gradients at the boundary of the marginal polytope. Empirically, we demonstrate the increased quality of results found by tightening the relaxation over the marginal polytope as well as the spanning tree polytope on synthetic and real-world instances.
    Comment: 25 pages, 12 figures. To appear in Neural Information Processing Systems (NIPS) 2015. Corrected reference and cleaned up bibliography.
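
    For readers unfamiliar with the conditional gradient method, the sketch below shows the generic Frank-Wolfe loop in which the linear minimization step plays the role of the MAP call described above. The TRW objective, the barrier modification from the paper, and a real MAP solver are all out of scope here; the toy quadratic objective and the simplex "MAP" oracle are placeholders.

```python
import numpy as np

# Generic conditional-gradient (Frank-Wolfe) loop. In the paper's setting,
# the linear minimization over the marginal polytope is a MAP call; here
# `map_oracle` is a placeholder that returns a minimizing polytope vertex.

def frank_wolfe(grad, map_oracle, mu0, iters=100):
    mu = mu0
    for t in range(iters):
        s = map_oracle(grad(mu))     # vertex minimizing <grad(mu), s> over the polytope
        gamma = 2.0 / (t + 2.0)      # standard diminishing step size
        mu = (1 - gamma) * mu + gamma * s
    return mu

# Toy example: minimize ||mu - b||^2 over the probability simplex, whose
# "MAP" oracle just picks the coordinate with the smallest gradient entry.
b = np.array([0.2, 0.5, 0.3])
grad = lambda mu: 2 * (mu - b)
map_oracle = lambda g: np.eye(len(g))[np.argmin(g)]
mu = frank_wolfe(grad, map_oracle, np.ones(3) / 3)
print(mu)  # approaches b, which lies inside the simplex
```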

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Direction Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean-norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of lengths greater than 1000. The waterfall region of LP decoding is seen to begin at a slightly higher signal-to-noise ratio than for sum-product BP; unlike BP, however, LP decoding exhibits no error floor. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule.
    Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
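
    The sketch below shows, under stated assumptions, the general shape of an ADMM decoder of this kind: each parity check keeps a replica of its incident bits, and the replica update is a Euclidean projection onto the parity polytope. The efficient two-slice projection is the paper's technical contribution and is left here as a placeholder function; the penalty parameter, iteration count, and variable names are illustrative.

```python
import numpy as np

# Sketch of ADMM-based LP decoding: minimize llr . x subject to, for each
# check j, the restriction of x to the check's bits lying in the parity
# polytope (convex hull of even-weight binary vectors). The projection
# routine is a placeholder; the two-slice algorithm would supply it.

def admm_decode(llr, checks, project_parity_polytope, rho=1.0, iters=200):
    n = len(llr)
    z = {j: np.full(len(idx), 0.5) for j, idx in enumerate(checks)}
    lam = {j: np.zeros(len(idx)) for j, idx in enumerate(checks)}
    deg = np.zeros(n)
    for idx in checks:
        deg[list(idx)] += 1
    x = np.full(n, 0.5)
    for _ in range(iters):
        # x-update: average the check replicas, shifted by the channel LLRs
        acc = np.zeros(n)
        for j, idx in enumerate(checks):
            acc[list(idx)] += z[j] - lam[j] / rho
        x = np.clip((acc - llr / rho) / np.maximum(deg, 1), 0.0, 1.0)
        # z-update: project onto the parity polytope, then dual ascent step
        for j, idx in enumerate(checks):
            v = x[list(idx)] + lam[j] / rho
            z[j] = project_parity_polytope(v)
            lam[j] += rho * (x[list(idx)] - z[j])
    return (x > 0.5).astype(int)   # hard decision on the relaxed solution
```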

    How to mesh up Ewald sums (I): A theoretical and numerical comparison of various particle mesh routines

    Standard Ewald sums, which calculate e.g. the electrostatic energy or the force in periodic systems of charged particles, can be sped up considerably by use of the fast Fourier transform (FFT). In this article we investigate three widely used algorithms for the FFT-accelerated Ewald sum: the so-called particle-particle particle-mesh (P3M), particle mesh Ewald (PME), and smooth PME methods. We present a unified view of the underlying techniques and of the various ingredients that comprise these routines. Additionally, we offer detailed accuracy measurements, which shed some light on the influence of several tuning parameters and also show that the existing methods, although similar in spirit, exhibit remarkable differences in accuracy. We propose combinations of the individual components, mostly relying on the P3M approach, which we regard as the most flexible.
    Comment: 18 pages, 8 figures included, revtex style
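
    The common skeleton of these mesh routines is: interpolate the charges onto a grid, Fourier-transform the grid, weight each mode with a reciprocal-space influence function, and accumulate the k-space energy. A minimal numpy sketch follows, using nearest-grid-point charge assignment and the plain Ewald weight 4*pi*exp(-k^2/(4*alpha^2))/k^2 rather than the higher-order assignment schemes and optimized influence functions the article compares; all parameter values are illustrative.

```python
import numpy as np

# Sketch of the reciprocal-space part of a mesh Ewald sum:
# (1) assign charges to an M^3 mesh (nearest grid point, for brevity),
# (2) FFT the mesh, (3) weight each mode by a Gaussian-screened Coulomb
# Green's function, (4) sum to get the k-space energy. Real P3M/PME use
# higher-order charge assignment and tuned influence functions.

def kspace_energy(pos, q, L, M=32, alpha=1.0):
    mesh = np.zeros((M, M, M))
    idx = np.rint(pos / L * M).astype(int) % M          # nearest grid point
    np.add.at(mesh, (idx[:, 0], idx[:, 1], idx[:, 2]), q)
    rho_k = np.fft.fftn(mesh)
    k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)          # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                                # exclude the k = 0 mode
    green = 4 * np.pi * np.exp(-k2 / (4 * alpha**2)) / k2
    return np.sum(green * np.abs(rho_k) ** 2) / (2 * L**3)

# Two opposite unit charges in a cubic box of side L = 10:
pos = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, 1.0]])
print(kspace_energy(pos, np.array([1.0, -1.0]), L=10.0))
```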