
    Homometric Point Sets and Inverse Problems

    The inverse problem of diffraction theory in essence amounts to the reconstruction of the atomic positions of a solid from its diffraction image. From a mathematical perspective, this is a notoriously difficult problem, even in the idealised situation of perfect diffraction from an infinite structure. Here, the problem is analysed via the autocorrelation measure of the underlying point set, where two point sets are called homometric when they share the same autocorrelation. For the class of mathematical quasicrystals within a given cut and project scheme, the homometry problem becomes equivalent to Matheron's covariogram problem, in the sense of determining the window from its covariogram. Although certain uniqueness results are known for convex windows, interesting examples of distinct homometric model sets already emerge in the plane. The uncertainty level increases in the presence of diffuse scattering. Already in one dimension, a mixed spectrum can be compatible with structures of different entropy. We expand on this example by constructing a family of mixed systems with fixed diffraction image but varying entropy. We also outline how this generalises to higher dimensions.
    Comment: 8 pages
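
    A minimal sketch of what homometry means in the simplest finite setting (an illustrative toy, not a construction from the paper): the subsets {0, 1, 3, 4} and {0, 1, 2, 5} of Z/8Z have different cyclic gap sequences, so they are not related by translation or reflection, yet they share the same autocorrelation and hence the same diffraction intensities.

```python
import numpy as np

N = 8
A = [0, 1, 3, 4]   # two subsets of Z/8Z with different cyclic gap sequences
B = [0, 1, 2, 5]

def indicator(points, n):
    """0/1 indicator vector of a point set in Z/nZ."""
    v = np.zeros(n)
    v[list(points)] = 1.0
    return v

def autocorrelation(points, n):
    """Circular autocorrelation of the indicator function."""
    f = np.fft.fft(indicator(points, n))
    return np.fft.ifft(np.abs(f) ** 2).real.round(10)

def diffraction(points, n):
    """Diffraction intensities: squared modulus of the DFT of the indicator."""
    return np.abs(np.fft.fft(indicator(points, n))) ** 2

print(autocorrelation(A, N))   # [4. 2. 1. 2. 2. 2. 1. 2.]
print(autocorrelation(B, N))   # identical, although A and B are not congruent
print(np.allclose(diffraction(A, N), diffraction(B, N)))   # True
```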

    Poisson inverse problems

    In this paper we focus on nonparametric estimators in inverse problems for Poisson processes involving the use of wavelet decompositions. Adopting an adaptive wavelet Galerkin discretization, we find that our method combines the well-known theoretical advantages of wavelet-vaguelette decompositions for inverse problems in terms of optimally adapting to the unknown smoothness of the solution, together with the remarkably simple closed-form expressions of Galerkin inversion methods. Adapting the results of Barron and Sheu [Ann. Statist. 19 (1991) 1347-1369] to the context of log-intensity functions approximated by wavelet series with the use of the Kullback-Leibler distance between two point processes, we also present an asymptotic analysis of convergence rates that justifies our approach. In order to shed some light on the theoretical results obtained and to examine the accuracy of our estimates in finite samples, we illustrate our method by the analysis of some simulated examples.
    Comment: Published at http://dx.doi.org/10.1214/009053606000000687 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
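
    A rough illustration of wavelet-based recovery of a Poisson intensity (a toy sketch only: it bins the data and applies a plain Haar transform with a universal hard threshold, not the adaptive wavelet-Galerkin scheme of the paper, and all numerical settings are invented).

```python
import numpy as np

rng = np.random.default_rng(0)

# Binned observations of an inhomogeneous Poisson process on [0, 1].
n = 256                                            # number of bins (a power of two)
t = (np.arange(n) + 0.5) / n
lam = 20 + 60 * np.exp(-((t - 0.3) / 0.05) ** 2) + 30 * (t > 0.7)   # mean counts per bin
counts = rng.poisson(lam)

def haar_forward(x):
    """Orthonormal Haar transform of a vector of length 2^J."""
    approx, details = x.astype(float), []
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))  # detail coefficients, finest first
        approx = (even + odd) / np.sqrt(2)
    return approx, details[::-1]                   # return details coarsest first

def haar_inverse(approx, details):
    x = approx
    for d in details:
        even, odd = (x + d) / np.sqrt(2), (x - d) / np.sqrt(2)
        x = np.empty(2 * len(d))
        x[0::2], x[1::2] = even, odd
    return x

approx, details = haar_forward(counts)
sigma = np.sqrt(max(counts.mean(), 1.0))           # crude noise scale for Poisson counts
thresh = sigma * np.sqrt(2 * np.log(n))            # universal threshold (heuristic)
details = [np.where(np.abs(d) > thresh, d, 0.0) for d in details]
estimate = haar_inverse(approx, details)           # denoised intensity estimate per bin

print(float(np.abs(estimate - lam).mean()))        # mean absolute error of the estimate
```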

    Invisibility and Inverse Problems

    This survey of recent developments in cloaking and transformation optics is an expanded version of the lecture by Gunther Uhlmann at the 2008 Annual Meeting of the American Mathematical Society.
    Comment: 68 pages, 12 figures. To appear in the Bulletin of the AMS

    Bayesian inference for inverse problems

    Traditionally, the MaxEnt workshops start with a tutorial day. This paper summarizes my talk during the 2001 workshop at Johns Hopkins University. The main idea of the talk is to show how Bayesian inference naturally gives us all the tools we need to solve real inverse problems: starting with simple inversion, where we assume that we know the forward model and all the input model parameters exactly, up to more realistic advanced problems of myopic or blind inversion, where we may be uncertain about the forward model and may have noisy data. Starting with an introduction to inverse problems through a few examples and an explanation of their ill-posed nature, I briefly present the main classical deterministic methods, such as data matching and classical regularization methods, to show their limitations. I then present the main classical probabilistic methods based on likelihood, information theory and maximum entropy, as well as the Bayesian inference framework for such problems. I show that the Bayesian framework not only generalizes all these methods, but also gives us natural tools, for example, for inferring the uncertainty of the computed solutions, for estimating the hyperparameters, or for handling myopic or blind inversion problems. Finally, through a deconvolution example, I present a few state-of-the-art methods based on Bayesian inference particularly designed for some of the mass spectrometry data processing problems.
    Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 20 pages, 13 Postscript figures
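
    As a minimal sketch of the closed-form Bayesian inversion alluded to above (a toy Gaussian deconvolution, not the author's mass-spectrometry method; the blur kernel, noise level and prior variance are invented), a Gaussian prior combined with Gaussian noise gives a Gaussian posterior, so both the estimate and its uncertainty are available explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward model: circular convolution y = H x + noise with a Gaussian blur kernel.
n = 128
t = np.arange(n)
kernel = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
H = np.stack([np.roll(kernel, i - n // 2) for i in range(n)])   # circulant blur matrix

x_true = np.zeros(n)
x_true[[30, 50, 90]] = [1.0, 2.0, 1.5]       # a sparse "spectrum" of three peaks
sigma = 0.02                                  # noise standard deviation (assumed known)
y = H @ x_true + sigma * rng.normal(size=n)

# Gaussian prior x ~ N(0, tau^2 I).  The posterior is Gaussian with
#   covariance C = (H^T H / sigma^2 + I / tau^2)^{-1}
#   mean       m = C H^T y / sigma^2   (a Tikhonov-regularized solution)
tau = 1.0
C = np.linalg.inv(H.T @ H / sigma**2 + np.eye(n) / tau**2)
m = C @ H.T @ y / sigma**2
std = np.sqrt(np.diag(C))                     # pointwise posterior uncertainty

print(m[[30, 50, 90]])                        # posterior mean at the true peak positions
print(std[[30, 50, 90]])                      # and the associated uncertainties
```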

    Inverse zero-sum problems II

    Let $G$ be an additive finite abelian group. A sequence over $G$ is called a minimal zero-sum sequence if the sum of its terms is zero and no proper subsequence has this property. Davenport's constant of $G$ is the maximum of the lengths of the minimal zero-sum sequences over $G$. Its value is well known for groups of rank two. We investigate the structure of minimal zero-sum sequences of maximal length for groups of rank two. Assuming a well-supported conjecture on this problem for groups of the form $C_m \oplus C_m$, we determine the structure of these sequences for groups of rank two. Combining our result with partial results on this conjecture yields unconditional results for certain groups of rank two.
    Comment: The new version contains results related to Davenport's constant only; other results will be described separately
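
    For intuition about the constant itself (a naive brute-force sketch for very small groups, unrelated to the structural results of the paper), Davenport's constant of $C_m \oplus C_n$ with $m \mid n$ is known to be $m + n - 1$, which the following exhaustive search reproduces for $C_2 \oplus C_2$ and $C_3 \oplus C_3$.

```python
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subsequence(seq, m, n):
    """True if some non-empty subsequence of seq sums to (0, 0) in Z_m x Z_n."""
    for r in range(1, len(seq) + 1):
        for sub in combinations(seq, r):
            if sum(a for a, _ in sub) % m == 0 and sum(b for _, b in sub) % n == 0:
                return True
    return False

def davenport(m, n):
    """Davenport constant of Z_m x Z_n by exhaustive search (tiny groups only).

    Uses the equivalent characterisation: D(G) is the smallest L such that every
    sequence of length L over G contains a non-empty zero-sum subsequence.
    """
    elements = [(a, b) for a in range(m) for b in range(n)]
    length = 1
    while True:
        if all(has_zero_sum_subsequence(seq, m, n)
               for seq in combinations_with_replacement(elements, length)):
            return length
        length += 1

print(davenport(2, 2))   # 3 = 2 + 2 - 1
print(davenport(3, 3))   # 5 = 3 + 3 - 1
```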

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the sheer non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. We first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in machine learning, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge between the two communities and encourage cross-fertilization of ideas.
    Comment: 13 pages
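
    A small concrete instance of the kind of crossover the survey has in mind (a sketch with invented settings, not a method from the paper): the same regularized least-squares inversion can be attacked with plain gradient descent or with a Nesterov-style momentum scheme popular in machine learning, and on ill-conditioned problems the accelerated variant typically reaches a much lower objective value in the same number of iterations.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small ill-conditioned linear inverse problem: recover x from y = A x + noise by
# minimizing f(x) = 0.5 * ||A x - y||^2 + 0.5 * lam * ||x||^2.
n = 100
A = rng.normal(size=(n, n)) @ np.diag(np.logspace(0, -3, n))   # decaying singular values
x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=n)
lam = 1e-4

grad = lambda x: A.T @ (A @ x - y) + lam * x
obj = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + 0.5 * lam * np.sum(x ** 2)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # 1 / Lipschitz constant of the gradient

def gradient_descent(iters):
    x = np.zeros(n)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(iters):
    """Gradient descent with Nesterov momentum (a standard accelerated scheme)."""
    x = z = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        x_new = z - step * grad(z)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

print(obj(gradient_descent(500)), obj(nesterov(500)))   # accelerated run is typically lower
```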