    Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems

    This paper deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms: variational and statistical. In the variational approach, the reconstructed signal is the solution to an optimization problem that establishes a tradeoff between fidelity to the data and smoothness conditions via a quadratic regularization associated with a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. We give a generic form of the reconstructed signals for both approaches, allowing for a rigorous comparison of the two. We fully characterize the conditions under which the two formulations yield the same solution, which is a periodic spline in the case of sampling measurements. We also show that this equivalence between the two approaches remains valid on simulations for a broad class of problems. This extends the practical range of applicability of the variational method.
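
    As a minimal illustration of this equivalence (a toy setup of our own, not the paper's general operator framework): for uniform sampling measurements of a periodic signal, with the regularization/whitening operator taken to be the first derivative, the variational (Tikhonov) solution and the Gaussian-process MMSE (Wiener) estimator coincide once the regularization weight is set to the noise variance. All names and parameters below are illustrative assumptions.

```python
import numpy as np

# Sketch: variational vs. statistical reconstruction of a periodic signal
# from noisy uniform samples. Assumption (ours): the operator L is d/dt,
# which acts as multiplication by 2*pi*i*k in the Fourier domain.
rng = np.random.default_rng(0)
n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)
truth = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
sigma = 0.3
y = truth + sigma * rng.normal(size=n)

k = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
sym2 = np.abs(2j * np.pi * k) ** 2            # |Fourier symbol of L|^2
Y = np.fft.fft(y)

# Variational: argmin ||y - x||^2 + lam * ||L x||^2, solved per frequency.
lam = sigma**2
x_var = np.fft.ifft(Y / (1.0 + lam * sym2)).real

# Statistical: Wiener/MMSE filter with prior power spectrum S_k = 1/|l_k|^2;
# the unpenalized k = 0 mode is passed through unchanged in both estimators.
S = np.divide(1.0, sym2, out=np.zeros(n), where=sym2 > 0)
gain = np.where(k == 0, 1.0, S / (S + sigma**2))
x_mmse = np.fft.ifft(Y * gain).real

print("max |variational - MMSE|:", np.max(np.abs(x_var - x_mmse)))  # ~1e-16
```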

    Spatial multi-level interacting particle simulations and information theory-based error quantification

    We propose a hierarchy of multi-level kinetic Monte Carlo methods for sampling high-dimensional, stochastic lattice particle dynamics with complex interactions. The method is based on the efficient coupling of different spatial resolution levels, taking advantage of the low sampling cost in a coarse space and developing local reconstruction strategies from coarse-grained dynamics. Microscopic reconstruction corrects possibly significant errors introduced through coarse-graining, leading to a controlled-error approximation of the sampled stochastic process. In this manner, the proposed multi-level algorithm overcomes known shortcomings of coarse-graining of particle systems with complex interactions, such as combined long- and short-range particle interactions and/or complex lattice geometries. Specifically, we provide error analysis for the approximation of long-time stationary dynamics in terms of relative entropy and prove that the information loss in the multi-level methods grows linearly in time, which in turn implies that an appropriate observable in the stationary regime is the information loss of the path measures per unit time. We show that this observable can either be estimated a priori or be tracked computationally a posteriori in the course of a simulation. The stationary regime is of critical importance to molecular simulations, as it is relevant to long-time sampling, to obtaining phase diagrams, and to studying the metastability properties of high-dimensional complex systems. Finally, the multi-level nature of the method provides flexibility in combining rejection-free and null-event implementations, generating a hierarchy of algorithms with an adjustable number of rejections that includes well-known rejection-free and null-event algorithms. Comment: 34 pages.
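
    The a posteriori tracking of the information loss can be sketched concretely (an illustrative toy model with rates of our own, not the paper's systems): while running a kinetic Monte Carlo simulation of a fine-scale lattice jump process, accumulate the Girsanov log-likelihood ratio between the fine path measure and a coarse-grained (here, mean-field) approximation; dividing by the elapsed time estimates the relative-entropy loss per unit time.

```python
import numpy as np

# Sketch: a-posteriori estimate of the relative entropy rate between the
# path measures of a fine lattice dynamics and a mean-field coarse-graining.
# All rates and parameters are illustrative assumptions, not the paper's.
rng = np.random.default_rng(1)
N, beta, J, T_end = 64, 1.0, 0.5, 200.0
s = rng.integers(0, 2, size=N)                # occupations in {0, 1}

def fine_rates(s):
    # flip rates with nearest-neighbour interaction on a periodic lattice
    nb = np.roll(s, 1) + np.roll(s, -1)
    return np.where(s == 1, np.exp(-beta * J * nb), 1.0)

def coarse_rates(s):
    # mean-field coarse-graining: neighbours replaced by average coverage
    return np.where(s == 1, np.exp(-beta * J * 2.0 * s.mean()), 1.0)

t = 0.0
log_ratio = 0.0
while t < T_end:
    rf, rc = fine_rates(s), coarse_rates(s)
    Rf, Rc = rf.sum(), rc.sum()
    dt = rng.exponential(1.0 / Rf)            # holding time (fine dynamics)
    log_ratio -= (Rf - Rc) * dt               # continuous Girsanov term
    i = rng.choice(N, p=rf / Rf)              # site of the next jump
    log_ratio += np.log(rf[i] / rc[i])        # jump Girsanov term
    s[i] = 1 - s[i]
    t += dt

print("estimated information loss per unit time:", log_ratio / t)
```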

    ENSO dynamics: low-dimensional-chaotic or stochastic?

    We apply a test for low-dimensional, deterministic dynamics to the Niño 3 time series for the El Niño Southern Oscillation (ENSO). The test is negative, indicating that the dynamics is high-dimensional/stochastic. However, applying stochastic forcing to a time-delay equation for equatorial-wave dynamics can reproduce this stochastic dynamics and other important aspects of ENSO. Without such stochastic forcing, this model yields low-dimensional, deterministic dynamics; hence these results emphasize the importance of the stochastic nature of the atmosphere-ocean interaction in low-dimensional models of ENSO.
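
    A minimal sketch of such a model, assuming an illustrative Suarez-Schopf-type delayed oscillator (the authors' exact delay equation may differ): with sigma = 0 the dynamics below is low-dimensional and deterministic; the added white-noise forcing produces the stochastic behavior that the test detects.

```python
import numpy as np

# Sketch: Euler-Maruyama integration of a stochastically forced delay
# equation, dT/dt = T - T^3 - alpha*T(t - delta) + sigma*xi(t).
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(2)
alpha, delta, sigma = 0.75, 6.0, 0.2
dt, n_steps = 0.01, 200_000
lag = int(delta / dt)

T = np.empty(n_steps + lag)
T[:lag] = 0.1                                 # constant history on [-delta, 0]
for n in range(lag, n_steps + lag - 1):
    drift = T[n] - T[n] ** 3 - alpha * T[n - lag]
    T[n + 1] = T[n] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

print("sample std of the simulated index:", T[lag:].std())
```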

    Fourier Analysis of Stochastic Sampling Strategies for Assessing Bias and Variance in Integration

    Coupled coarse graining and Markov Chain Monte Carlo for lattice systems

    We propose an efficient Markov Chain Monte Carlo method for sampling equilibrium distributions of stochastic lattice models, capable of correctly handling long- and short-range particle interactions. The proposed method is a Metropolis-type algorithm whose proposal probability transition matrix is based on the coarse-grained approximating measures introduced in a series of works by M. Katsoulakis, A. Majda, D. Vlachos, P. Plechac, L. Rey-Bellet, and D. Tsagkarogiannis. We prove that the proposed algorithm reduces the computational cost of evaluating energy differences and has mixing properties comparable to the classical microscopic Metropolis algorithm, controlled by the level of coarsening and the reconstruction procedure. The properties and effectiveness of the algorithm are demonstrated on an exactly solvable example, a one-dimensional Ising-type model, comparing the efficiency of the single spin-flip Metropolis dynamics and the proposed coupled Metropolis algorithm. Comment: 20 pages, 4 figures.
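
    A minimal sketch of the idea (a simplified construction of our own, not the coarse-grained measures of the cited works): a Metropolis-Hastings chain for a one-dimensional Ising model whose proposal resamples a spin from a cheap mean-field conditional, with the acceptance ratio correcting back to the exact microscopic Gibbs measure.

```python
import numpy as np

# Sketch: Metropolis-Hastings with a coarse-grained (mean-field) proposal
# for a 1D Ising model, H(s) = -J * sum_i s_i * s_{i+1}. The acceptance
# step restores exact sampling from the microscopic Gibbs measure.
rng = np.random.default_rng(3)
N, beta, J, n_steps = 128, 0.7, 1.0, 200_000
s = rng.choice([-1, 1], size=N)

def coarse_prob_up(s):
    # mean-field conditional P(s_i = +1) from the coarse (average) field
    h = 2.0 * J * s.mean()
    return 1.0 / (1.0 + np.exp(-2.0 * beta * h))

def local_energy(s, i, v):
    # microscopic energy carried by site i when it takes value v
    return -J * v * (s[(i - 1) % N] + s[(i + 1) % N])

for _ in range(n_steps):
    i = rng.integers(N)
    p_up = coarse_prob_up(s)
    new = 1 if rng.random() < p_up else -1
    if new == s[i]:
        continue                              # proposal leaves the state unchanged
    q_fwd = p_up if new == 1 else 1.0 - p_up  # proposal prob. of `new`
    s_new = s.copy()
    s_new[i] = new
    p_up_rev = coarse_prob_up(s_new)
    q_rev = p_up_rev if s[i] == 1 else 1.0 - p_up_rev
    dE = local_energy(s_new, i, new) - local_energy(s, i, s[i])
    if rng.random() < min(1.0, np.exp(-beta * dE) * q_rev / q_fwd):
        s = s_new

print("mean magnetization:", s.mean())
```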

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained processing environment, is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$ or $D = O(R_{net} 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed is always finite. Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory.
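
    A minimal sketch of the oversampling-versus-precision tradeoff (illustrative dithered one-bit quantization, not the paper's distributed coding scheme): replacing one high-precision ADC per Nyquist sample with N one-bit comparators whose thresholds are i.i.d. uniform dither, and averaging the sign bits, recovers the amplitude with an error that decays as the sensor density N grows.

```python
import numpy as np

# Sketch: trade amplitude precision for sensor density. Each sample of a
# bounded field is observed by N one-bit comparators with uniform dither on
# [-A, A], so P(bit = 1) = (f + A) / (2A); averaging the bits gives an
# unbiased amplitude estimate. All values are illustrative assumptions.
rng = np.random.default_rng(4)
A = 1.0                                       # bounded dynamic range |f| <= A
samples = 0.8 * np.sin(2 * np.pi * np.linspace(0, 1, 16, endpoint=False))

def one_bit_estimate(f_value, n_sensors):
    dither = rng.uniform(-A, A, size=n_sensors)
    return A * (2.0 * np.mean(f_value > dither) - 1.0)

for N in (10, 100, 1000, 10000):
    est = np.array([one_bit_estimate(f, N) for f in samples])
    err = np.abs(est - samples).max()
    print(f"N = {N:5d} one-bit sensors/sample -> max error {err:.4f}")
```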