72 research outputs found

    Multiresolution models in image restoration and reconstruction with medical and other applications


    Single-image super-resolution using sparsity constraints and non-local similarities at multiple resolution scales

    Traditional super-resolution methods produce a clean high-resolution image from several observed, degraded low-resolution images following an acquisition or degradation model. Such a model describes how each output pixel is related to one or more input pixels and is called the data fidelity term in the regularization framework. Additionally, prior knowledge such as piecewise smoothness can be incorporated to improve the restoration result. Under the degradation model and this prior knowledge, the influence of an observed pixel on the restored pixels is local; traditional methods therefore exploit spatial redundancy only within a local neighborhood and are referred to as local methods. Recently, non-local methods, which make use of similarities between image patches across the whole image, have gained popularity in image restoration; in the super-resolution literature they are often referred to as exemplar-based methods. In this paper, we exploit the similarity of patches within the same scale (related to the class of non-local methods) and across different resolution scales of the same image (related to fractal-based methods). For patch fusion, we employ a kernel regression algorithm, which yields a blurry and noisy version of the desired high-resolution image. For the final reconstruction step, we develop a novel restoration algorithm: a joint deconvolution/denoising method based on split Bregman iterations that, as prior knowledge, exploits the sparsity of the image in the shearlet transform domain. Initial results indicate an improvement over both classical local and state-of-the-art non-local super-resolution methods.
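    The reconstruction step described in this abstract can be illustrated compactly. Below is a minimal Python sketch of a split Bregman joint deconvolution/denoising loop, with an orthonormal wavelet sparsity prior (via PyWavelets) standing in for the shearlet prior used in the paper; the blur is assumed circular so the quadratic subproblem can be solved in the Fourier domain. All function names, parameter values, and the choice of transform are illustrative assumptions, not the authors' implementation.

    # Minimal split Bregman sketch for the joint deconvolution/denoising step,
    # with an l1 sparsity prior in an orthonormal wavelet domain (PyWavelets)
    # standing in for the shearlet prior of the paper. Parameter values and
    # names are illustrative assumptions.
    import numpy as np
    import pywt

    def soft(x, t):
        """Soft-thresholding (shrinkage) operator."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def split_bregman_deconv(f, psf, mu=50.0, lam=1.0, n_iter=30, wavelet="db4"):
        """Recover u from f = h * u + noise (circular blur) by solving
        min_u ||W u||_1 + (mu/2) ||h * u - f||^2 with split Bregman."""
        # psf is assumed to be given in FFT layout (centre at the top-left pixel).
        H = np.fft.fft2(psf, s=f.shape)
        F = np.fft.fft2(f)
        u = f.copy()

        # Splitting variable d = W u and Bregman variable b.
        d, slices = pywt.coeffs_to_array(pywt.wavedec2(u, wavelet))
        b = np.zeros_like(d)

        for _ in range(n_iter):
            # u-update: quadratic subproblem, solved exactly in the Fourier
            # domain because W is orthonormal (W^T W = I) and the blur is circular.
            rhs = pywt.waverec2(
                pywt.array_to_coeffs(d - b, slices, output_format="wavedec2"),
                wavelet)[:f.shape[0], :f.shape[1]]
            u = np.real(np.fft.ifft2(
                (mu * np.conj(H) * F + lam * np.fft.fft2(rhs)) /
                (mu * np.abs(H) ** 2 + lam)))

            # d-update: shrinkage of the wavelet coefficients of u.
            Wu, _ = pywt.coeffs_to_array(pywt.wavedec2(u, wavelet))
            d = soft(Wu + b, 1.0 / lam)

            # Bregman update.
            b = b + Wu - d
        return u

    In the paper's pipeline, the input f to such a step would be the blurry, noisy estimate produced by the kernel regression patch fusion.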

    Comparative Analysis and Fusion of MRI and PET Images based on Wavelets for Clinical Diagnosis

    Nowadays, medical imaging modalities such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Computed Tomography (CT) play a crucial role in clinical diagnosis and treatment planning. The images obtained from each of these modalities contain complementary information about the imaged organ. Image fusion algorithms bring this disparate information together into a single image, allowing clinicians to diagnose disorders more quickly. This paper proposes a novel technique for the fusion of MRI and PET images based on the YUV color space and the wavelet transform. Quality assessment based on entropy showed that the method achieves promising results for medical image fusion. The paper presents a comparative analysis of the fusion of MRI and PET images using different wavelet families at various decomposition levels for the detection of brain tumors as well as Alzheimer's disease. The quality assessment and visual analysis showed that the Dmey wavelet at decomposition level 3 is optimal for the fusion of MRI and PET images. The paper also compared several fusion rules (average, maximum, and minimum) and found that the maximum fusion rule outperformed the other two.
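    To make the described pipeline concrete, here is a hedged Python sketch of wavelet-domain MRI/PET fusion in the YUV color space with the maximum rule, plus an entropy-based quality index. The level-3 'dmey' wavelet follows the abstract; the function names, color matrices, and pre-registration assumption are standard but assumed choices, not the authors' code.

    # Sketch: fuse the MRI image with the luminance (Y) channel of the
    # pseudo-coloured PET image in the wavelet domain using the maximum rule,
    # keep the PET chrominance, and assess the result with Shannon entropy.
    import numpy as np
    import pywt

    RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                        [-0.147, -0.289,  0.436],
                        [ 0.615, -0.515, -0.100]])

    def fuse_mri_pet(mri, pet_rgb, wavelet="dmey", level=3):
        """mri: 2-D array in [0, 1]; pet_rgb: HxWx3 array in [0, 1].
        Both inputs are assumed co-registered on the same pixel grid."""
        yuv = pet_rgb @ RGB2YUV.T                      # PET to YUV colour space
        a_mri, sl = pywt.coeffs_to_array(pywt.wavedec2(mri, wavelet, level=level))
        a_pet, _ = pywt.coeffs_to_array(pywt.wavedec2(yuv[..., 0], wavelet, level=level))

        # Maximum fusion rule: keep the coefficient with the larger magnitude.
        fused = np.where(np.abs(a_mri) >= np.abs(a_pet), a_mri, a_pet)

        y = pywt.waverec2(pywt.array_to_coeffs(fused, sl, output_format="wavedec2"),
                          wavelet)[:mri.shape[0], :mri.shape[1]]
        yuv[..., 0] = y                                # replace luminance only
        return np.clip(yuv @ np.linalg.inv(RGB2YUV).T, 0.0, 1.0)

    def entropy(img, bins=256):
        """Shannon entropy of an intensity image, used here as the quality index."""
        hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    Replacing the "maximum" rule in fuse_mri_pet with an average or minimum of the coefficient arrays reproduces the other two fusion rules compared in the paper.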

    Image Fusion Based on Shearlets
