883 research outputs found

    Combining local regularity estimation and total variation optimization for scale-free texture segmentation

    Texture segmentation is a standard image processing task, crucial to many applications. The present contribution focuses on the particular subset of scale-free textures, and its originality resides in the combination of three key ingredients: first, texture characterization relies on the concept of local regularity; second, local regularity is estimated from new multiscale quantities referred to as wavelet leaders; third, segmentation from local regularity faces a fundamental bias-variance trade-off: by nature, local regularity estimation shows high variability that impairs the detection of changes, while a posteriori smoothing of regularity estimates precludes correctly locating those changes. Instead, the present contribution proposes several variational problem formulations based on total variation and proximal resolutions that effectively circumvent this trade-off. Estimation and segmentation performance of the proposed procedures are quantified and compared on synthetic as well as real-world textures.
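    As a hedged illustration of the total-variation idea invoked in this abstract (a generic toy, not the authors' segmentation procedure), the sketch below denoises a piecewise-constant 1D signal by solving min_x ½‖x − y‖² + λ·TV(x) via projected gradient on the dual, a standard Chambolle-style scheme. The function name, the test signal, and the choice λ = 2 are all illustrative.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=2000):
    """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    by projected gradient descent on the dual variable p."""
    p = np.zeros(len(y) - 1)
    tau = 0.25  # step size <= 1/||D D^T||, and the difference operator D has ||D||^2 <= 4
    for _ in range(n_iter):
        # Primal iterate implied by the dual variable: x = y - D^T p,
        # where (D^T p)[i] = p[i-1] - p[i] with zero boundary terms.
        x = y.copy()
        x[:-1] += p
        x[1:] -= p
        # Gradient step on 0.5*||y - D^T p||^2, then project onto the box [-lam, lam].
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    x = y.copy()
    x[:-1] += p
    x[1:] -= p
    return x

# Toy demo: a noisy two-level piecewise-constant signal.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0], 50)
noisy = truth + 0.3 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=2.0)
```

    The dual formulation sidesteps the non-smoothness of the TV term: the primal estimate stays piecewise constant, preserving the jump, which is exactly the change-localization property that a posteriori smoothing destroys.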

    Adaptive Reconstruction for Electrical Impedance Tomography with a Piecewise Constant Conductivity

    In this work we propose and analyze a numerical method for electrical impedance tomography that recovers a piecewise constant conductivity from boundary voltage measurements. It is based on standard Tikhonov regularization with a Modica-Mortola penalty functional and adaptive mesh refinement, using suitable a posteriori error estimators of residual type that involve the state, adjoint, and variational inequality in the necessary optimality condition, together with a separate marking strategy. We prove the convergence of the adaptive algorithm in the following sense: the sequence of discrete solutions contains a subsequence convergent to a solution of the continuous necessary optimality system. Several numerical examples are presented to illustrate the convergence behavior of the algorithm. (26 pages, 12 figures)
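    For readers unfamiliar with it, the Modica-Mortola penalty mentioned above is, in its standard phase-field form (the paper's exact weights and scaling may differ), the functional

```latex
F_\varepsilon(u) \;=\; \int_\Omega \left( \varepsilon\, |\nabla u|^2 \;+\; \frac{1}{\varepsilon}\, u^2 (1-u)^2 \right) dx ,
```

    whose double-well term pushes u toward the values 0 and 1 while the gradient term penalizes the interfaces between them; as ε → 0, F_ε Γ-converges to a multiple of the perimeter of the jump set, which is what favors piecewise constant reconstructions.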

    Space adaptive and hierarchical Bayesian variational models for image restoration

    The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we will focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter will be automatically estimated by setting up a hierarchical modeling framework, i.e. introducing informative or non-informative hyperpriors depending on the information available on the parameters. More specifically, in the first part of the thesis we will focus on the restoration of natural images, by introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters will be addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical Maximum Likelihood approach. In the second part of the thesis, we will address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, will follow directly from the assumption of the independence of the components in the signal. The study of the properties of the resulting energy functionals will thus lead to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
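    As a loose, self-contained illustration of the hierarchical idea described above (a toy resembling iterative alternating schemes from the literature, not the thesis's actual algorithms), one can model x_j ~ N(0, θ_j) with an independent gamma hyperprior θ_j ~ Gamma(β, ϑ) and alternate two closed-form updates on the joint MAP objective: x given θ is a weighted ridge solve, and θ given x is the positive root of the stationarity condition. The function name, operator A, noise level, and hyperparameter values are all illustrative assumptions.

```python
import numpy as np

def hierarchical_map(A, y, sigma=0.05, beta=2.0, vartheta=0.01, n_iter=30):
    """Joint MAP for y = A x + noise, x_j ~ N(0, theta_j), theta_j ~ Gamma(beta, vartheta),
    by alternating minimization (illustrative toy, requires beta > 3/2)."""
    m, n = A.shape
    theta = np.ones(n)
    eta = beta - 1.5
    for _ in range(n_iter):
        # x-step: weighted ridge regression with per-component weights 1/theta_j.
        H = A.T @ A / sigma**2 + np.diag(1.0 / theta)
        x = np.linalg.solve(H, A.T @ y / sigma**2)
        # theta-step: positive root of theta^2/vartheta - eta*theta - x^2/2 = 0,
        # obtained by setting d/d theta of the joint negative log posterior to zero.
        theta = 0.5 * vartheta * (eta + np.sqrt(eta**2 + 2.0 * x**2 / vartheta))
    return x, theta

# Toy sparse-recovery demo: 3 spikes among 20 unknowns, 40 measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20)) / np.sqrt(40)
x_true = np.zeros(20)
x_true[[3, 8, 15]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(40)
x_hat, theta_hat = hierarchical_map(A, y)
```

    The space-variance is visible in theta_hat: components on the support keep a large variance (weak shrinkage), while the rest collapse toward the hyperprior's small-θ regime, producing strong shrinkage exactly where the signal is absent.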

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, along with information on the social program, the venue, special meetings, and more.

    Predictability, complexity and learning

    We define predictive information I_pred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times T: I_pred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then I_pred(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite-parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and in the analysis of physical systems through statistical mechanics and dynamical systems theory. Further, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of I_pred(T) provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in different problems in physics, statistics, and biology. (53 pages, 3 figures, 98 references)
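    To make the definition concrete: for a stationary first-order Markov chain, past and future interact only through the current state, so the predictive information saturates at the single-step mutual information I(X_t; X_{t+1}), one of the "finite" cases mentioned above. The sketch below (an illustration, not taken from the paper; the transition matrix is an arbitrary choice) compares the analytic value with a plug-in estimate from a simulated two-state chain.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # row-stochastic transition matrix
pi = np.array([2 / 3, 1 / 3])        # stationary distribution: pi @ P == pi

# Analytic predictive information for a first-order chain:
# I(X_t; X_{t+1}) = H(X_{t+1}) - H(X_{t+1} | X_t).
I_analytic = entropy(pi) - (pi[0] * entropy(P[0]) + pi[1] * entropy(P[1]))

# Plug-in estimate from one long simulated trajectory.
rng = np.random.default_rng(0)
n = 50_000
u = rng.random(n)
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = 1 if u[t] < P[states[t - 1], 1] else 0

joint = np.zeros((2, 2))
np.add.at(joint, (states[:-1], states[1:]), 1)  # bigram counts
joint /= joint.sum()
I_empirical = entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint)
```

    Because the Markov model has finitely many parameters, longer pasts add no further predictive information, matching the paper's "finite I_pred" regime; logarithmic growth would instead appear if the model's parameters themselves had to be learned from the data.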

    Identifying Structure Transitions Using Machine Learning Methods

    Methodologies from data science and machine learning, both new and old, provide an exciting opportunity to investigate physical systems using extremely expressive statistical modeling techniques. Physical transitions are of particular interest, as they are accompanied by pattern changes in the configurations of the systems. Detecting and characterizing pattern changes in data happens to be a particular strength of statistical modeling in data science, especially with the highly expressive and flexible neural network models that have become increasingly computationally accessible in recent years through performance improvements in both hardware and algorithmic implementations. Conceptually, the machine learning approach can be regarded as one that employs algorithms eschewing explicit instructions in favor of strategies based on pattern extraction and inference, driven by statistical analysis and large, complex data sets. This allows for the investigation of physical systems using only raw configurational information to make inferences, instead of relying on physical information obtained from a priori knowledge of the system. This work focuses on the extraction of useful compressed representations of physical configurations from systems of interest to automate phase classification tasks, in addition to the identification of critical points and crossover regions.
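    As a minimal, hedged sketch of the compressed-representation idea (not this work's actual pipeline), the snippet below applies PCA to raw binary "spin" configurations drawn from a toy ordered phase and a toy disordered phase; the leading principal component, which here ends up tracking the magnetization, already separates the two phases with no physical input. The sample sizes and alignment probability are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_spins = 200, 64

# Toy "phases": ordered configurations are nearly aligned +1/-1 spins
# (with a random overall sign), disordered ones are independent coin flips.
ordered = np.where(rng.random((n_samples, n_spins)) < 0.95, 1.0, -1.0)
ordered *= rng.choice([-1.0, 1.0], size=(n_samples, 1))
disordered = rng.choice([-1.0, 1.0], size=(n_samples, n_spins))

X = np.vstack([ordered, disordered])
Xc = X - X.mean(axis=0)

# PCA via SVD: project each configuration onto the leading principal component.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# Ordered samples land at large |pc1| (two clusters, one per overall sign),
# disordered samples near zero: an unsupervised one-number phase indicator.
score_ordered = np.abs(pc1[:n_samples]).mean()
score_disordered = np.abs(pc1[n_samples:]).mean()
```

    The point of the toy is that the compressed coordinate was not told about magnetization; it emerges from the data, which is the same mechanism that lets such representations flag critical points and crossover regions in real systems.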

    Very High Dimensional Semiparametric Models

    Very high dimensional semiparametric models play a major role in many areas, in particular in signal detection problems where sparse signals or sparse events are hidden among high dimensional noise. Concrete examples are genomic studies in biostatistics and imaging problems. In a broad context, all kinds of statistical inference and model selection problems for high dimensional data were discussed.

    Statistics meets Machine Learning

    Theory and application go hand in hand in most areas of statistics. In a world flooded with huge amounts of data waiting to be analyzed, classified and transformed into useful outputs, designing fast, robust and stable algorithms has never been as important as it is today. On the other hand, irrespective of whether the focus is put on estimation, prediction, classification or other purposes, it is equally crucial to provide strong theoretical guarantees for such algorithms. Many statisticians, independently of their original research interests, have become increasingly aware of the numerical needs faced in numerous applications, including gene expression profiling, health care, pattern and speech recognition, data security, marketing personalization and natural language processing, to name just a few. The goal of this workshop is twofold: (a) to exchange knowledge on successful algorithmic approaches and discuss some of the existing challenges, and (b) to bring together researchers in statistics and machine learning with the aim of sharing expertise and exploiting possible differences in points of view to obtain a better understanding of some of the common important problems.