41 research outputs found

    A unified view on patch aggregation

    Patch-based methods are widely used in many areas of image processing, such as image restoration or image editing and synthesis. Patches capture local image geometry and structure and are much easier to model than whole images: in practice, patches are small enough to be represented by simple multivariate priors. An important question arising in all patch-based methods is that of patch aggregation. For instance, in image restoration, restored patches are usually not compatible, in the sense that two overlapping restored patches do not necessarily assign the same values to their common pixels. A standard way to overcome this difficulty is to see the values provided by different patches at a given pixel as independent estimators of a true unknown value and to aggregate these estimators. This aggregation step usually boils down to a simple average, with uniform weights or with weights depending on the trust we place in these different estimators. In this paper, we propose a probabilistic framework aiming at a better understanding of this crucial and often neglected step. The key idea is to see the aggregation of two patches as a fusion of their models rather than a fusion of estimators. The proposed fusion operation is quite intuitive and generalizes previous aggregation methods. It also yields a novel interpretation of the Expected Patch Log Likelihood (EPLL) proposed in [40].
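    To make the aggregation step concrete, here is a minimal sketch (assuming NumPy; the per-patch estimates, trust weights, and patch positions are hypothetical inputs) of the standard weighted averaging of overlapping patch estimates that the paper generalizes:

```python
import numpy as np

def aggregate_patches(patches, weights, positions, image_shape, p):
    """Aggregate overlapping p x p patch estimates by weighted averaging.

    patches  : (N, p, p) array of restored patch estimates
    weights  : (N,) per-patch trust weights (uniform weights give plain averaging)
    positions: (N, 2) top-left corner (row, col) of each patch
    """
    acc = np.zeros(image_shape)   # weighted sum of patch values per pixel
    norm = np.zeros(image_shape)  # sum of weights per pixel
    for patch, w, (i, j) in zip(patches, weights, positions):
        acc[i:i+p, j:j+p] += w * patch
        norm[i:i+p, j:j+p] += w
    return acc / np.maximum(norm, 1e-12)  # avoid division by zero at uncovered pixels
```

    With uniform weights this recovers the plain average; the paper's point is to replace this fusion of estimators by a fusion of the patch models themselves.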

    PatchNR: Learning from Very Few Images by Patch Normalizing Flow Regularization

    Learning neural networks from only a few available images is an important ongoing research topic with tremendous potential for applications. In this paper, we introduce a powerful regularizer for the variational modeling of inverse problems in imaging. Our regularizer, called patch normalizing flow regularizer (patchNR), involves a normalizing flow learned on small patches of very few images. In particular, the training is independent of the considered inverse problem, so that the same regularizer can be applied for different forward operators acting on the same class of images. By investigating the distribution of patches versus that of the whole image class, we prove that our model is indeed a MAP approach. Numerical examples for low-dose and limited-angle computed tomography (CT) as well as superresolution of material images demonstrate that our method provides very high quality results. The training set consists of just six images for CT and one image for superresolution. Finally, we combine our patchNR with ideas from internal learning to perform superresolution of natural images directly from the low-resolution observation, without knowledge of any high-resolution image.
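    As a rough illustration of the variational setup (a sketch, not the authors' code: the forward operator `A`, the flow's `log_prob` method, the patch size, and the weight `lam` are placeholder assumptions), the reconstruction minimizes a data-fidelity term plus the negative patch log-likelihood under the learned flow:

```python
import torch
import torch.nn.functional as F

def patchnr_objective(x, y, A, flow, lam=0.1, p=6, n_patches=1000):
    """Variational objective: data fidelity + normalizing-flow patch regularizer.

    x    : (1, 1, H, W) current reconstruction (requires_grad=True)
    y    : observed measurements
    A    : forward operator (e.g. a Radon transform for CT), callable
    flow : normalizing flow with a log_prob method over flattened p x p patches
    """
    fidelity = 0.5 * torch.sum((A(x) - y) ** 2)
    # extract all overlapping p x p patches and subsample a random batch
    patches = F.unfold(x, kernel_size=p).transpose(1, 2).reshape(-1, p * p)
    idx = torch.randint(0, patches.shape[0], (n_patches,))
    reg = -flow.log_prob(patches[idx]).mean()  # negative patch log-likelihood
    return fidelity + lam * reg
```

    One would then minimize this objective over x with a gradient-based optimizer; since the flow only ever sees patches, the same flow can regularize CT, superresolution, or any other forward operator acting on the same image class.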

    PCA Reduced Gaussian Mixture Models with Applications in Superresolution

    Despite the rapid development of computational hardware, the treatment of large and high-dimensional data sets is still a challenging problem. This paper provides a twofold contribution to the topic. First, we propose a Gaussian Mixture Model in conjunction with a reduction of the dimensionality of the data in each component of the model by principal component analysis, called PCA-GMM. To learn the (low-dimensional) parameters of the mixture model, we propose an EM algorithm whose M-step requires the solution of constrained optimization problems. Fortunately, these constrained problems do not depend on the usually large number of samples and can be solved efficiently by an (inertial) proximal alternating linearized minimization algorithm. Second, we apply our PCA-GMM for the superresolution of 2D and 3D material images based on the approach of Sandeep and Jacob. Numerical results confirm the moderate influence of the dimensionality reduction on the overall superresolution result.
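    A minimal sketch of the E-step, under one simplified reading of the per-component model in which each component's covariance is a low-rank PCA part plus isotropic noise (the paper's constrained M-step and its inertial PALM solver are not shown):

```python
import numpy as np
from scipy.stats import multivariate_normal

def pca_gmm_responsibilities(X, pis, mus, Us, Lambdas, sigma2):
    """E-step responsibilities for a GMM whose k-th covariance is the
    low-rank-plus-noise matrix U_k diag(Lambda_k) U_k^T + sigma2 * I,
    i.e. a d-dimensional PCA subspace inside each component.

    X: (n, D) data; Us[k]: (D, d) orthonormal PCA basis of component k.
    """
    n, D = X.shape
    K = len(pis)
    log_r = np.empty((n, K))
    for k in range(K):
        cov = Us[k] @ np.diag(Lambdas[k]) @ Us[k].T + sigma2 * np.eye(D)
        log_r[:, k] = np.log(pis[k]) + multivariate_normal.logpdf(X, mus[k], cov)
    log_r -= log_r.max(axis=1, keepdims=True)  # stabilize before exponentiating
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)    # (n, K) responsibilities
```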

    Bayesian imaging inverse problem with SA-Roundtrip prior via HMC-pCN sampler

    Bayesian inference with a deep generative prior has received considerable interest for solving imaging inverse problems in many scientific and engineering fields. The prior distribution is learned from available prior measurements, so its selection amounts to an important representation-learning step. The SA-Roundtrip, a novel deep generative prior, is introduced to enable controlled sampling generation and to identify the data's intrinsic dimension. This prior incorporates a self-attention structure within a bidirectional generative adversarial network. Subsequently, Bayesian inference is applied to the posterior distribution in the low-dimensional latent space using the Hamiltonian Monte Carlo with preconditioned Crank-Nicolson (HMC-pCN) algorithm, which is proven to be ergodic under specific conditions. Experiments conducted on computed tomography (CT) reconstruction with the MNIST and TomoPhantom datasets reveal that the proposed method outperforms state-of-the-art comparisons, consistently yielding a robust and superior point estimator along with precise uncertainty quantification.
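    For intuition, here is a minimal latent-space sampler using the plain pCN proposal (the Hamiltonian component of HMC-pCN is omitted for brevity, and `log_likelihood`, with the generator folded in, is a placeholder):

```python
import numpy as np

def pcn_sampler(log_likelihood, dim, n_iter=5000, beta=0.2, rng=None):
    """Plain preconditioned Crank-Nicolson sampler in a latent space with a
    standard Gaussian prior.

    log_likelihood: z -> log p(y | G(z)), with the generator G folded in.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(dim)
    ll = log_likelihood(z)
    samples = []
    for _ in range(n_iter):
        # pCN proposal: prior-preserving autoregressive move
        z_prop = np.sqrt(1.0 - beta**2) * z + beta * rng.standard_normal(dim)
        ll_prop = log_likelihood(z_prop)
        # acceptance ratio involves only the likelihood (the prior cancels)
        if np.log(rng.uniform()) < ll_prop - ll:
            z, ll = z_prop, ll_prop
        samples.append(z.copy())
    return np.array(samples)
```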

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University; it explored recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. This conference bridges the gap between academic and R&D institutions, social visionaries, and experts from all strata of society to present their ongoing research activities and to foster research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    A Wasserstein-type distance in the space of Gaussian Mixture Models

    In this paper we introduce a Wasserstein-type distance on the set of Gaussian mixture models. This distance is defined by restricting the set of possible coupling measures in the optimal transport problem to Gaussian mixture models. We derive a very simple discrete formulation for this distance, which makes it suitable for high-dimensional problems. We also study the corresponding multi-marginal and barycenter formulations. We show some properties of this Wasserstein-type distance, and we illustrate its practical use with some examples in image processing.
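    A small sketch of the discrete formulation (assuming the POT package for the discrete optimal transport solve): the ground cost is the pairwise squared 2-Wasserstein distance between Gaussian components, and the mixture weights are coupled by a standard discrete OT problem:

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import sqrtm

def w2_gaussians(m0, S0, m1, S1):
    """Squared 2-Wasserstein distance between two Gaussians (Bures formula)."""
    S1_half = sqrtm(S1)
    cross = sqrtm(S1_half @ S0 @ S1_half)
    return np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2 * np.real(cross))

def mw2(pis0, mus0, covs0, pis1, mus1, covs1):
    """Squared Wasserstein-type distance between two GMMs: a discrete OT
    problem whose ground cost is the pairwise Gaussian W2^2 between components."""
    K0, K1 = len(pis0), len(pis1)
    M = np.array([[w2_gaussians(mus0[i], covs0[i], mus1[j], covs1[j])
                   for j in range(K1)] for i in range(K0)])
    a = np.asarray(pis0, dtype=float)
    b = np.asarray(pis1, dtype=float)
    plan = ot.emd(a, b, M)       # optimal coupling of the component weights
    return np.sum(plan * M)      # squared distance
```

    The restriction of the couplings to Gaussian mixtures is what reduces the continuous problem to this small linear program over the component weights.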

    Tackling Distribution Shift - Detection and Mitigation

    One of the biggest challenges of employing supervised deep learning approaches is their inability to perform as well beyond standardized datasets in real-world applications. Abrupt changes in the form of outliers, or overall changes in the data distribution after model deployment, therefore result in a performance drop. Owing to these changes, which induce distributional shifts, we propose two methodologies: the first is the detection of these shifts, and the second is adapting the model to overcome the low predictive performance they cause. The former usually refers to anomaly detection, the process of finding patterns in the data that do not resemble the expected behavior. Understanding the behavior of data by capturing their distribution can help us find those rare and uncommon samples without the need for annotated data. In this thesis, we exploit the ability of generative adversarial networks (GANs) to capture the latent representation in order to design a model that differentiates expected behavior from deviated samples. Furthermore, we integrate self-supervision into generative adversarial networks to improve the predictive performance of our proposed anomaly detection model. In addition to shift detection, we propose an ensemble approach to adapt a model under varied distributional shifts using domain adaptation. In summary, this thesis focuses on detecting shifts under the umbrella of anomaly detection, as well as mitigating the effect of several distributional shifts by adapting deep learning models using a Bayesian and information-theoretic approach.
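    As a generic illustration of GAN-based anomaly scoring (a sketch in the spirit of reconstruction-based detectors, not the thesis's exact model; `E`, `G`, and `f_D` are hypothetical encoder, generator, and discriminator-feature modules):

```python
import torch

def gan_anomaly_score(x, E, G, f_D, alpha=0.9):
    """Generic GAN-based anomaly score: in-distribution samples should be
    well reconstructed through the latent space, anomalies should not.

    x   : (N, C, H, W) batch of images
    E   : encoder mapping images to latent codes
    G   : generator mapping latent codes back to images
    f_D : an intermediate (4D) feature map of the discriminator
    """
    with torch.no_grad():
        x_rec = G(E(x))  # roundtrip reconstruction through the latent space
        rec_err = torch.mean(torch.abs(x - x_rec), dim=(1, 2, 3))
        feat_err = torch.mean(torch.abs(f_D(x) - f_D(x_rec)), dim=(1, 2, 3))
    return alpha * rec_err + (1 - alpha) * feat_err  # higher = more anomalous
```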