
    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that we address in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem which we prove has a unique solution. We demonstrate that this solution can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows us to specify the desired density of the inpainting mask a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
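
    As a minimal illustration of the reconstruction step described above, the sketch below fills in missing pixels by homogeneous diffusion inpainting, i.e. it solves the discrete Laplace equation with the kept pixels as Dirichlet data, using plain Jacobi iterations. The FED acceleration and the spatial/tonal optimisation from the paper are not reproduced here, and the function and parameter names are illustrative.

        import numpy as np

        def diffusion_inpaint(image, mask, n_iter=2000):
            """Homogeneous diffusion inpainting: kept pixels (mask=True) act as
            Dirichlet data, missing pixels solve the discrete Laplace equation."""
            u = np.where(mask, image, image[mask].mean())   # crude initialisation
            for _ in range(n_iter):
                # one Jacobi sweep: each missing pixel becomes the average of its
                # four neighbours (edge padding ~ reflecting boundary conditions)
                p = np.pad(u, 1, mode='edge')
                avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
                u = np.where(mask, image, avg)
            return u

        # keep ~5% of the pixels of a toy image and reconstruct the rest
        rng = np.random.default_rng(0)
        img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
        mask = rng.random(img.shape) < 0.05
        rec = diffusion_inpaint(img, mask)

    Roughly speaking, probabilistic sparsification chooses which pixels to keep by tentatively removing random candidates, reconstructing, and retaining the candidates whose removal increases the reconstruction error the most; the tonal optimisation then adjusts the stored grey values, which for homogeneous diffusion is the least squares problem discussed in the abstract.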

    Spline regression for zero-inflated models

    We propose a regression model for count data when the classical generalized linear model approach is too rigid due to a large number of zero counts and a nonlinear influence of continuous covariates. Zero-inflation is applied to account for the presence of excess zeros, with separate link functions for the zero and the nonzero component. Nonlinearity in covariates is captured by spline functions based on B-splines. Our algorithm relies on maximum-likelihood estimation and allows for adaptive box-constrained knots, thus improving the goodness of the spline fit and allowing for the detection of sensitivity change points. A simulation study substantiates the numerical stability of the algorithm used to infer such models. The AIC criterion is shown to serve well for model selection, in particular when nonlinearities are weak, in which case BIC tends towards overly simplistic models. We fit the introduced models to real data on children's dental health, linking caries counts with the Body-Mass-Index (BMI) and other socioeconomic factors. This reveals a puzzling nonmonotonic influence of BMI on caries counts which has yet to be explained by clinical experts.
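
    To make the model concrete, here is a minimal sketch of a zero-inflated Poisson fit with a logit link for the zero component and a log link for the count component, using a cubic B-spline basis for a continuous covariate. The knots are fixed here (the paper's algorithm additionally optimises box-constrained knot positions), and all data, names and parameter choices are illustrative.

        import numpy as np
        from scipy.interpolate import BSpline
        from scipy.optimize import minimize
        from scipy.special import expit, gammaln

        def zip_negloglik(params, B, y):
            """Negative log-likelihood of a zero-inflated Poisson model; the same
            spline basis B is used for both components in this toy version."""
            k = B.shape[1]
            beta0, beta1 = params[:k], params[k:]
            pi = expit(B @ beta0)        # P(structural zero), logit link
            lam = np.exp(B @ beta1)      # Poisson mean, log link
            log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
            ll = np.where(y == 0,
                          np.log(pi + (1.0 - pi) * np.exp(-lam)),
                          np.log(1.0 - pi) + log_pois)
            return -ll.sum()

        # toy data: BMI-like covariate, nonmonotone Poisson mean, excess zeros
        rng = np.random.default_rng(0)
        x = np.sort(rng.uniform(14.0, 30.0, 300))
        lam_true = np.exp(1.0 - 0.02 * (x - 22.0) ** 2)
        y = rng.binomial(1, 0.7, x.size) * rng.poisson(lam_true)

        # cubic B-spline design matrix with fixed (not yet optimised) knots
        knots = np.r_[[14.0] * 4, [18.0, 22.0, 26.0], [30.0] * 4]
        B = BSpline.design_matrix(x, knots, 3).toarray()

        fit = minimize(zip_negloglik, np.zeros(2 * B.shape[1]), args=(B, y), method='BFGS')

    Model selection via AIC, as discussed in the abstract, would then compare fits obtained with different knot configurations.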

    IGA-based Multi-Index Stochastic Collocation for random PDEs on arbitrary domains

    This paper proposes an extension of the Multi-Index Stochastic Collocation (MISC) method for forward uncertainty quantification (UQ) problems in computational domains whose shape is other than a square or cube, by exploiting isogeometric analysis (IGA) techniques. Introducing IGA solvers into the MISC algorithm is very natural since they are tensor-based PDE solvers, which is precisely what is required by the MISC machinery. Moreover, the combination-technique formulation of MISC allows the straightforward reuse of existing implementations of IGA solvers. We present numerical results to showcase the effectiveness of the proposed approach. Comment: version 3, version after revision.
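
    As a small illustration of the combination-technique formulation mentioned above, the sketch below computes the classical sparse-grid combination coefficients for a downward-closed multi-index set; the multi-index estimator is then a weighted sum of full tensor-product solves, each of which can be delegated to an existing solver. This is the generic combination-technique formula, not the paper's specific index-set construction, and the index set chosen here is illustrative.

        import itertools

        def combination_coefficients(index_set):
            """Combination-technique weights: c_alpha = sum over beta in {0,1}^d
            with alpha+beta still in the (downward-closed) set of (-1)^|beta|."""
            index_set = {tuple(a) for a in index_set}
            d = len(next(iter(index_set)))
            coeffs = {}
            for alpha in index_set:
                c = 0
                for beta in itertools.product((0, 1), repeat=d):
                    if tuple(a + b for a, b in zip(alpha, beta)) in index_set:
                        c += (-1) ** sum(beta)
                coeffs[alpha] = c
            return coeffs

        # 2D total-degree index set: only indices on the two outermost diagonals
        # receive nonzero weights, so only those tensor-product solves are needed
        I = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
        print(combination_coefficients(I))

    Roughly, each multi-index would encode a discretisation level per spatial direction (the IGA mesh/degree) and per stochastic direction (the collocation level), and the estimator is the sum of c_alpha times the corresponding tensor solve.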

    Multiple Testing and Variable Selection along Least Angle Regression's path

    In this article, we investigate multiple testing and variable selection using the Least Angle Regression (LARS) algorithm in high dimensions under the Gaussian noise assumption. LARS is known to produce a piecewise affine solution path with change points referred to as the knots of the LARS path. The cornerstone of the present work is the closed-form expression of the exact joint law of K-tuples of knots conditional on the variables selected by LARS, namely the so-called post-selection joint law of the LARS knots. Numerical experiments demonstrate the perfect fit of our findings. Our main contributions are threefold. First, we build testing procedures on variables entering the model along the LARS path in the general design case when the noise level can be unknown. These testing procedures are referred to as the Generalized t-Spacing tests (GtSt), and we prove that they have exact non-asymptotic level (i.e., the Type I error is exactly controlled). In that way, we extend the work of Taylor et al. (2014), where the Spacing test works for consecutive knots and known variance. Second, we introduce a new exact multiple false negatives test after model selection in the general design case when the noise level can be unknown. We prove that this testing procedure has exact non-asymptotic level for general designs and unknown noise level. Last, we give an exact control of the false discovery rate (FDR) under the orthogonal design assumption. Monte-Carlo simulations and a real data experiment are provided to illustrate our results in this case. Of independent interest, we introduce an equivalent formulation of the LARS algorithm based on a recursive function. Comment: 62 pages; new: FDR control and power comparison between Knockoff, FCD, Slope and our proposed method; new: the introduction has been revised and now presents a concise overview of the main results. We believe that this introduction brings new insights compared to the previous version.
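
    For intuition on the knots that the tests above are built on, the sketch below extracts the LARS knots lambda_1 > lambda_2 > ... with scikit-learn and forms the simple first-knot Spacing test of Taylor et al. (2014) under the global null, assuming unit-norm columns and a known noise level. This is the classical special case, not the paper's Generalized t-Spacing test, and the rescaling of scikit-learn's alphas by n_samples is an assumption of this sketch.

        import numpy as np
        from scipy.stats import norm
        from sklearn.linear_model import lars_path

        rng = np.random.default_rng(1)
        n, p, sigma = 100, 20, 1.0
        X = rng.standard_normal((n, p))
        X /= np.linalg.norm(X, axis=0)        # unit-norm columns
        y = sigma * rng.standard_normal(n)    # global null: no true signal

        # knots of the LARS path; sklearn reports alpha_k = max|X^T r| / n_samples,
        # so multiply back by n to recover the knots lambda_k
        alphas, active, _ = lars_path(X, y, method='lar')
        lambdas = alphas * n

        # simple Spacing test for the first knot (global null, known sigma):
        # p = Phi_bar(lambda_1 / sigma) / Phi_bar(lambda_2 / sigma)
        p_value = norm.sf(lambdas[0] / sigma) / norm.sf(lambdas[1] / sigma)
        print(active[0], p_value)             # first variable to enter, and its p-value

    Under the global null this p-value is uniformly distributed; the paper's GtSt procedures generalize this idea to arbitrary pairs of knots, general designs, and unknown noise level.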