
    Multiscale Hybrid Nonlocal Means Filtering Using Modified Similarity Measure

    A new multiscale implementation of nonlocal means filtering (MHNLM) for image denoising is proposed, together with a modified similarity measure for patch comparison. Treating each patch as an oriented surface, the notion of a normal-vector patch is introduced. The inner product of these normal-vector patches is defined and then used as the weight factor in a weighted Euclidean distance between intensity patches. The algorithm involves two steps. The first step is a multiscale implementation of accelerated nonlocal means filtering in the discrete stationary wavelet domain, which yields a refined version of the noisy patches for later comparison. The second step applies the proposed modification of standard nonlocal means filtering to the noisy image, using the reference patches obtained in the first step. Because these refined patches contain less noise, the computation of normal vectors and partial derivatives is more precise. Experimental results show performance equivalent or superior to various state-of-the-art algorithms.
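    As a rough illustration of the second step, the sketch below shows how such a modified similarity measure could plug into a pixel-wise nonlocal means loop. The helper names, the gradient-based normals, and the clipping of negative inner products are illustrative assumptions rather than the paper's exact construction; the `refined` image stands in for the output of the wavelet-domain first step.

```python
import numpy as np

def normal_vector_patch(patch):
    # Treat the intensity patch as a surface z = I(x, y); the (unnormalized)
    # surface normal at each pixel is (-dI/dx, -dI/dy, 1).
    gy, gx = np.gradient(patch.astype(float))
    n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def modified_patch_distance(p, q, ref_p, ref_q):
    # Weighted Euclidean distance between intensity patches; the weight factor
    # is the inner product of the normal-vector patches, computed on the
    # refined (less noisy) reference patches.
    w = np.sum(normal_vector_patch(ref_p) * normal_vector_patch(ref_q), axis=-1)
    w = np.clip(w, 0.0, 1.0)   # discard negatively aligned normals
    return float(np.sum(w * (p - q) ** 2))

def nlm_pixel(noisy, refined, i, j, patch=3, search=7, h=10.0):
    # Denoise pixel (i, j); the caller should pad the images so that the
    # patch and search windows stay inside the arrays.
    r, s = patch // 2, search // 2
    P = noisy[i - r:i + r + 1, j - r:j + r + 1]
    RP = refined[i - r:i + r + 1, j - r:j + r + 1]
    num = den = 0.0
    for a in range(i - s, i + s + 1):
        for b in range(j - s, j + s + 1):
            Q = noisy[a - r:a + r + 1, b - r:b + r + 1]
            RQ = refined[a - r:a + r + 1, b - r:b + r + 1]
            w = np.exp(-modified_patch_distance(P, Q, RP, RQ) / h ** 2)
            num += w * noisy[a, b]
            den += w
    return num / den
```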

    Hybrid Deep Learning Framework for Reduction of Mixed Noise via Low Rank Noise Estimation

    In this paper, an innovative hybrid deep learning framework (EN-CNN) is presented for reducing image noise that originates from heterogeneous sources. Specifically, EN-CNN is applied to benchmark natural images affected by a mixture of additive white Gaussian noise (AWGN) and impulsive noise (IN). Reducing mixed noise (AWGN and IN) is considerably more involved than removing a single noise type, because different noise statistics are present simultaneously. Although various effective deep learning approaches and classical state-of-the-art methods such as WNNM have been used to suppress AWGN alone, the same techniques are not suitable for mixed noise. In this context, EN-CNN can not only infer the changed noise statistics but also effectively eliminate the residual noise. First, EN-CNN employs classical neighborhood filtering followed by non-local low-rank estimation to reduce the IN and to estimate the characteristics of the noise that remains after IN reduction; this step yields a pre-processed image together with its residual noise statistics. Second, a convolutional neural network (CNN) is applied to the pre-processed image, guided by the noise statistics inferred in the first step. This two-pronged strategy, in conjunction with the deep learning mechanism, effectively handles mixed noise suppression, and the proposed framework yields promising results compared to various state-of-the-art approaches.
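    The two-step structure can be sketched as below. This is a generic stand-in, not the EN-CNN implementation: a median filter replaces the neighborhood filtering, a robust MAD estimator replaces the non-local low-rank noise estimation, and `cnn_denoiser` is assumed to be any trained, noise-level-aware denoiser passed in by the caller; the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_impulse_noise(img, size=3):
    # Stand-in for the neighborhood-filtering step: a median filter suppresses
    # the impulsive component while largely preserving structure.
    return median_filter(img, size=size)

def estimate_residual_sigma(img):
    # Stand-in for the non-local low-rank noise estimation: a robust
    # median-absolute-deviation estimate of the AWGN level from the finest
    # 2x2 difference coefficients (their standard deviation is 2*sigma).
    d = img[1:, 1:] - img[1:, :-1] - img[:-1, 1:] + img[:-1, :-1]
    return float(np.median(np.abs(d)) / (0.6745 * 2.0))

def mixed_noise_pipeline(noisy, cnn_denoiser):
    # Step 1: suppress the impulse noise, then estimate the residual noise
    # level of the pre-processed image.
    pre = remove_impulse_noise(noisy.astype(float))
    sigma = estimate_residual_sigma(pre)
    # Step 2: hand the pre-processed image and the estimated noise level to a
    # noise-level-aware CNN denoiser (any callable taking (image, sigma)).
    return cnn_denoiser(pre, sigma)
```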

    Wavelet decompositions and function spaces on the unit cube

    The main objective of this thesis is to develop a new construction of wavelet bases on the unit interval and to establish a characterization of Besov spaces on the unit cube by wavelet coefficients. This thesis consists of three parts. First, we consider several constructions of (bi-)orthogonal wavelet bases on the unit interval. From a practical point of view, these constructions have a common disadvantage. We provide a new construction of biorthogonal wavelet bases on the unit interval that avoids this common disadvantage of earlier constructions while preserving their advantages. Second, we introduce certain general families of functions that include all wavelet bases on the unit interval. These families are not necessarily derived by dilations and translations of a function. With these families, we establish Littlewood-Paley type estimates of Besov spaces. Combining these estimates and the wavelet decompositions from the first part, we characterize Besov spaces on the unit cube by the wavelet coefficients. Finally, we provide a constructive wavelet decomposition of L_p (1 ≤ p ≤ ∞) into box splines. To calculate wavelet coefficients, we apply the local polynomial L_2-approximation and then quasi-interpolation techniques. Furthermore, we characterize the Besov space B^α_q(L_q) by the constructive wavelet coefficients in an explicit form. We also provide the characterization of the Besov space on the unit cube. DeVore et al. have independently studied this characterization with a different proof; however, they have used a nonconstructive local approximation rather than the local L_2-approximation. Our explicit formula for calculating wavelet coefficients can be directly employed in numerical implementations.
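    For orientation, a wavelet characterization of Besov spaces typically takes the following norm-equivalence form (stated here for L_2-normalized wavelets on a d-dimensional domain; the precise indices and index ranges in the thesis may differ):

```latex
% Standard form of a wavelet characterization of Besov spaces
% (L_2-normalized wavelets \psi_{j,k} on a d-dimensional domain);
% the exact statement in the thesis may differ.
\[
  \|f\|_{B^{\alpha}_{q}(L_p)}
  \;\asymp\;
  \Bigl( \sum_{j \ge 0}
           2^{\,j\left(\alpha + d\left(\frac{1}{2} - \frac{1}{p}\right)\right) q}
           \Bigl( \sum_{k} \bigl|\langle f, \psi_{j,k} \rangle\bigr|^{p} \Bigr)^{q/p}
  \Bigr)^{1/q}.
\]
```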

    Structure of Optimal State Discrimination in Generalized Probabilistic Theories

    We consider optimal state discrimination in a general convex operational framework, the so-called generalized probabilistic theories (GPTs), and present a general method of optimal discrimination by applying the complementarity problem from convex optimization. The method exploits the convex geometry of states but not other detailed conditions or relations of states and effects. We also show that properties of optimal quantum state discrimination are shared by GPTs in general: (i) performing no measurement (guessing according to the prior probabilities) is sometimes optimal, and (ii) the optimal measurement is not unique.
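    For context, the optimization addressed here is the standard minimum-error discrimination problem, written in GPT language (ensemble {q_i, ω_i}, effects e_i, unit effect u). This is the textbook formulation rather than a result specific to the paper:

```latex
% Minimum-error state discrimination in a GPT: maximize the guessing
% probability over measurements, i.e. over sets of effects summing to the
% unit effect u (textbook formulation, stated for context only).
\[
  p_{\mathrm{guess}}
  \;=\;
  \max_{\{e_i\}} \; \sum_{i} q_i \, e_i(\omega_i)
  \qquad \text{subject to} \qquad
  \sum_{i} e_i = u, \quad e_i \ \text{effects}.
\]
```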

    Bayesian inference and model selection in latent class logit models with parameter constraints: An application to market segmentation

    Latent class models have recently drawn considerable attention among researchers and practitioners as useful tools for capturing heterogeneity across different segments in a target market or population. In this paper, we consider a latent class logit model with parameter constraints and address two important issues in latent class models, namely parameter estimation and selection of an appropriate number of classes, within a Bayesian framework. A simple Gibbs sampling algorithm is proposed for generating samples from the posterior distribution of the unknown parameters. Using the Gibbs output, we propose a method for determining an appropriate number of latent classes. A real-world marketing example, an application to market segmentation, is provided to illustrate the proposed method.
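    To make the sampling scheme concrete, below is a minimal, generic sketch of posterior simulation for a K-class latent class logit model with binary outcomes. It alternates drawing class memberships, drawing class probabilities from their Dirichlet full conditional, and updating class-specific coefficients. The paper's own Gibbs sampler and its parameter constraints are not reproduced here; in particular, this sketch substitutes a random-walk Metropolis step for the logit coefficients in place of a conjugate update, and all names, priors, and tuning values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik(y, X, beta):
    # Bernoulli-logit log-likelihood contributions under one class's coefficients.
    eta = X @ beta
    return y * eta - np.logaddexp(0.0, eta)

def latent_class_logit_sampler(y, X, K=2, iters=2000, step=0.1, prior_sd=5.0):
    n, p = X.shape
    beta = rng.normal(0.0, 0.1, size=(K, p))   # class-specific logit coefficients
    pi = np.full(K, 1.0 / K)                   # class membership probabilities
    draws = []
    for _ in range(iters):
        # 1) Draw each unit's class membership from its full conditional.
        ll = np.stack([loglik(y, X, beta[k]) for k in range(K)], axis=1)  # (n, K)
        logpost = np.log(pi) + ll
        prob = np.exp(logpost - logpost.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=prob[i]) for i in range(n)])
        # 2) Draw class probabilities from their Dirichlet full conditional.
        counts = np.bincount(z, minlength=K)
        pi = rng.dirichlet(1.0 + counts)
        # 3) Update each class's coefficients with a random-walk Metropolis step
        #    under a N(0, prior_sd^2 I) prior (a stand-in for a conjugate
        #    data-augmentation Gibbs update).
        for k in range(K):
            idx = z == k
            prop = beta[k] + rng.normal(0.0, step, size=p)
            def log_target(b):
                return loglik(y[idx], X[idx], b).sum() - 0.5 * (b @ b) / prior_sd**2
            if np.log(rng.uniform()) < log_target(prop) - log_target(beta[k]):
                beta[k] = prop
        draws.append((beta.copy(), pi.copy()))
    return draws
```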