
    Novel Video Completion Approaches and Their Applications

    Video completion refers to automatically restoring damaged or removed objects in a video sequence, with applications ranging from removal of undesired static or dynamic objects, to correction of missing or corrupted frames in old movies, to synthesis of new frames that add, modify, or generate a new visual story. The video completion problem can be solved using texture synthesis and/or data interpolation to fill in the holes of the sequence from the boundary inward. This thesis distinguishes between still image completion and video completion: the latter requires visually pleasing consistency obtained by taking the temporal information into account. Based on the concepts they apply, video completion techniques are categorized as inpainting-based and texture-synthesis-based, and we present a bandlet transform-based technique for each category. The proposed inpainting-based technique is a 3D volume regularization scheme that takes advantage of bandlet bases to exploit anisotropic regularities when reconstructing a damaged video. The proposed exemplar-based approach, on the other hand, performs video completion using precise patch fusion in the bandlet domain instead of patch replacement. The video completion task is then extended to two important applications in video restoration. First, we develop an automatic video text detection and removal method that benefits from the proposed inpainting scheme and a novel video text detector. Second, we propose a novel video super-resolution technique that applies the inpainting algorithm spatially in conjunction with an effective structure tensor generated using bandlet geometry. The experimental results show good performance of the proposed video inpainting method and demonstrate the effectiveness of bandlets in video completion tasks. The proposed video text detector and video super-resolution scheme also perform well in comparison with existing methods.
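
    A minimal sketch of the patch-fusion idea follows: several candidate exemplar patches are combined in a transform domain instead of copying a single best match. A 2D DCT stands in for the bandlet transform purely for illustration, and the function names, weights, and patch sizes are hypothetical rather than the thesis implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_patches(candidates, weights=None):
    """Fuse candidate exemplar patches by averaging their transform coefficients
    instead of copying a single best match."""
    candidates = np.asarray(candidates, dtype=float)       # shape (k, p, p)
    if weights is None:
        weights = np.full(len(candidates), 1.0 / len(candidates))
    coeffs = np.array([dctn(patch, norm="ortho") for patch in candidates])
    fused = np.tensordot(weights, coeffs, axes=1)          # weighted coefficient average
    return idctn(fused, norm="ortho")

# Example: fuse three 8x8 candidate patches found by patch matching
patches = np.random.rand(3, 8, 8)
filled_patch = fuse_patches(patches, weights=[0.5, 0.3, 0.2])
```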

    Effective SAR image despeckling based on bandlet and SRAD

    Despeckling a SAR image without losing its features is a demanding task, as SAR images are intrinsically affected by a multiplicative noise called speckle. This thesis proposes a novel technique to efficiently despeckle SAR images. Using an SRAD filter, a bandlet transform-based filter, and a guided filter, the speckle noise in SAR images is removed without losing the image features. The input SAR image is fed in parallel to both the SRAD and the bandlet transform-based filters. The SRAD filter despeckles the SAR image, and its despeckled output is used as the reference image for the guided filter. In the bandlet transform-based despeckling scheme, the input SAR image is first decomposed using the bandlet transform, and the resulting coefficients are thresholded using a soft thresholding rule; all coefficients other than the low-frequency ones are adjusted in this way. The generalized cross-validation (GCV) technique is employed to find the most favorable threshold for each subband. The bandlet transform is able to extract edges and fine features of the image because it finds the direction along which the function is most regular and builds elongated orthogonal vectors in that direction. Simple soft thresholding with the optimum threshold despeckles the input SAR image, and the guided filter, with the help of the reference image, removes the remaining speckle from the bandlet-filtered output. In terms of numerical and visual quality, the proposed filtering scheme surpasses existing despeckling schemes.
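
    The soft thresholding rule and the GCV threshold search can be sketched as follows for a single subband of coefficients; this is a generic illustration using the standard definitions of soft thresholding and the GCV score, not the thesis code, and the grid search is a placeholder.

```python
import numpy as np

def soft_threshold(c, t):
    # Standard soft thresholding rule
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def gcv_score(c, t):
    # Generalized cross-validation score for threshold t applied to subband c
    n = c.size
    ct = soft_threshold(c, t)
    n_zero = np.count_nonzero(ct == 0)
    if n_zero == 0:
        return np.inf
    return (np.sum((c - ct) ** 2) / n) / (n_zero / n) ** 2

def gcv_threshold(c, num_candidates=100):
    # Pick the threshold that minimizes the GCV score over a simple grid
    grid = np.linspace(0.0, np.abs(c).max(), num_candidates)
    scores = [gcv_score(c, t) for t in grid]
    return grid[int(np.argmin(scores))]

subband = np.random.randn(64, 64)            # placeholder detail subband
t_opt = gcv_threshold(subband)
denoised_subband = soft_threshold(subband, t_opt)
```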

    IEEE Access Special Section Editorial: Emotion-Aware Mobile Computing

    With the rapid development of smartphones and wireless technology, mobile services and applications are growing rapidly worldwide. Advanced mobile computing and communications greatly enhance users' experience through the notion of "carrying small while enjoying large," which has had a huge impact on all aspects of people's lives in terms of work, social interaction, and the economy. Although these advanced techniques have extensively improved users' quality of experience (QoE), affective services cannot be adequately provided without efficient mechanisms for emotion-aware mobile computing, which involves various unique aspects, e.g., mobile data sensing and transmission; sentiment analysis and emotion recognition; and affective interaction. Under this new service paradigm, novel mobile services and innovative applications need to be extensively investigated to realize the great potential brought by emotion-aware mobile computing.

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework in terms of structured sparse estimation is described, showing that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are comparable to, often significantly better than, and at worst only marginally worse than the best published ones, at a lower computational cost.
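
    The piecewise linear estimate at the core of the framework can be sketched as follows, specialized to denoising (identity degradation operator) for brevity; the symbols and the component-selection rule are illustrative simplifications of the general MAP-EM formulation rather than the paper's implementation.

```python
import numpy as np

def gmm_map_estimate(y, means, covs, weights, sigma):
    """Piecewise linear estimate of a vectorized noisy patch y: apply each
    Gaussian component's Wiener filter and keep the component with the highest
    (log) model evidence under that Gaussian."""
    d = y.size
    best_ll, best_x = -np.inf, None
    for mu, cov, w in zip(means, covs, weights):
        c_y = cov + sigma ** 2 * np.eye(d)                 # covariance of the noisy patch
        x_hat = mu + cov @ np.linalg.solve(c_y, y - mu)    # linear (Wiener) MAP estimate
        _, logdet = np.linalg.slogdet(c_y)
        ll = np.log(w) - 0.5 * (logdet + (y - mu) @ np.linalg.solve(c_y, y - mu))
        if ll > best_ll:
            best_ll, best_x = ll, x_hat
    return best_x
```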

    A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to purposefully remove some of its portions. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The transmitted edges act as assistant information and help the inpainting process fill in the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. The novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to increase the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms, and show that the proposed algorithm works well.
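
    For intuition on the role of the p-Laplacian operator, a rough pixel-domain sketch of p-Laplacian diffusion inpainting is given below; the paper's model operates in the shearlet domain, so this spatial version and its parameter values are only illustrative.

```python
import numpy as np

def p_laplacian_inpaint(img, mask, p=1.2, n_iter=500, dt=0.1, eps=1e-6):
    """Fill the pixels where mask is True by evolving u_t = div(|grad u|^(p-2) grad u),
    keeping the known pixels fixed."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        gx = np.gradient(u, axis=1)
        gy = np.gradient(u, axis=0)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)              # regularized gradient magnitude
        coef = mag ** (p - 2)                               # p-Laplacian diffusivity
        div = np.gradient(coef * gx, axis=1) + np.gradient(coef * gy, axis=0)
        u[mask] += dt * div[mask]                           # update only the missing region
    return u
```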

    A multi-frame super-resolution algorithm using POCS and wavelets

    Super-Resolution (SR) is a generic term referring to a series of digital image processing techniques in which a high-resolution (HR) image is reconstructed from a set of low-resolution (LR) video frames or images. In other words, an HR image is obtained by integrating several LR frames captured from the same scene within a very short period of time. Constructing an SR image may require substantial computational resources. To manage this, the SR reconstruction process is organized into three steps, namely image registration, degradation function estimation, and image restoration. In this thesis, the fundamental steps in SR image reconstruction algorithms are first introduced. Several known SR image reconstruction approaches are then discussed in detail, including (1) traditional interpolation, (2) the frequency-domain approach, (3) iterative back-projection (IBP), (4) conventional projections onto convex sets (POCS), and (5) regularized inverse optimization. Based on the analysis of some of the existing methods, a wavelet-based POCS SR image reconstruction method is proposed. The new method is an extension of the conventional POCS method that performs some of the convex projection operations in the wavelet domain. A stochastic wavelet coefficient refinement technique is used to adjust the wavelet sub-image coefficients of the estimated HR image according to the stochastic F-distribution, in order to eliminate noisy or wrongly estimated pixels. The proposed SR method enhances the quality of the reconstructed HR image while retaining the simplicity of the conventional POCS method and increasing the convergence speed of the POCS iterations. Simulation results show that the proposed wavelet-based POCS iterative algorithm offers distinct features and improved performance compared with some of the SR approaches reviewed in this thesis.
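
    One POCS data-consistency projection can be sketched as follows, assuming a blur-free integer-factor decimation model for the degradation; the wavelet-domain coefficient refinement of the proposed method is omitted, so this only illustrates the projection idea, and all names are hypothetical.

```python
import numpy as np

def pocs_projection(hr, lr, scale):
    """Project the current HR estimate onto the set of images whose decimation
    matches the observed LR frame."""
    hr = hr.copy()
    simulated_lr = hr[::scale, ::scale]        # simulated LR from the current HR estimate
    residual = lr - simulated_lr
    hr[::scale, ::scale] += residual           # enforce consistency at the observed pixels
    return hr

# Usage: cycle the projection over all registered LR frames
# for lr in lr_frames:
#     hr = pocs_projection(hr, lr, scale=2)
```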

    Mathematical Approaches for Image Enhancement Problems

    This thesis develops novel techniques that solve several image enhancement problems using mathematical tools that are theoretically and technically well established and highly useful in image processing, such as wavelet transforms, partial differential equations, and variational models. Three subtopics are covered. First, a color image denoising framework is introduced to achieve high-quality denoising results by considering the correlations between color components, while existing denoising approaches can be plugged in flexibly. Second, a new and efficient framework for image contrast and color enhancement in the compressed wavelet domain is proposed. The proposed approach is capable of enhancing both global and local contrast and brightness while preserving color consistency. The framework does not require an inverse transform for image enhancement, since linear scale factors are applied directly to both the scaling and wavelet coefficients in the compressed domain, which results in high computational efficiency. Noise contaminating the image can also be efficiently reduced by introducing wavelet shrinkage terms adaptively at different scales. The proposed method is able to enhance a wavelet-coded image in a computationally efficient manner with high image quality and with less noise and fewer artifacts. The experimental results show that the proposed method produces encouraging results, both visually and numerically, compared with some existing approaches. Finally, the image inpainting problem is discussed. A literature review, a psychological analysis, and the challenges of image inpainting and related topics are described. An inpainting algorithm using energy minimization and texture mapping is proposed. A Mumford-Shah energy minimization model detects and preserves edges in the inpainting domain by detecting both the main structure and the detailed edges. This approach utilizes a faster hierarchical level set method and guarantees convergence independent of the initial conditions. The estimated segmentation results in the inpainting domain are stored in a segmentation map, which is consulted by a texture mapping algorithm when filling textured regions. We also propose an inpainting algorithm using the wavelet transform, which can be expected to yield a better estimate of the global structure of the unknown region in addition to its shape and texture properties, since wavelet transforms have been used for various image analysis problems thanks to their nice multi-resolution properties and decoupling characteristics.
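
    A minimal sketch of the coefficient-scaling idea is shown below. Note that the proposed framework applies the scale factors directly in the compressed domain without an inverse transform; this sketch performs a decompose/scale/reconstruct round trip (with PyWavelets) purely to make the effect concrete, and the parameter values are placeholders.

```python
import numpy as np
import pywt

def enhance_in_wavelet_domain(img, alpha=1.1, beta=1.5, shrink=2.0,
                              wavelet="db2", level=2):
    """Scale the approximation band (global brightness/contrast) by alpha and the
    detail bands (local contrast) by beta, with a small shrinkage term for noise."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    enhanced = [alpha * coeffs[0]]                           # approximation (scaling) band
    for details in coeffs[1:]:
        enhanced.append(tuple(
            beta * np.sign(d) * np.maximum(np.abs(d) - shrink, 0.0)  # gain + shrinkage
            for d in details))
    return pywt.waverec2(enhanced, wavelet)
```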

    3D exemplar-based image inpainting in electron microscopy

    In electron microscopy (EM), a common problem is the non-availability of data, which causes artefacts in reconstructions. The goal of this thesis is to artificially generate the missing EM data using exemplar-based inpainting (EBI). We implement an accelerated 3D version tailored to applications in EM, which reduces reconstruction times from days to minutes. We develop intelligent sampling strategies to find optimal data as input for reconstruction methods. Further, we investigate approaches to reduce electron dose and acquisition time; sparse sampling followed by inpainting is the most promising approach. As common evaluation measures may lead to misinterpretation of results in EM and falsify subsequent analysis, we propose to use application-driven metrics and demonstrate this in a segmentation task. A further application of our technique is the artificial generation of projections in tilt-based EM: EBI is used to generate missing projections so that the full angular range is covered, and the subsequent reconstructions are significantly enhanced in terms of resolution, which facilitates further analysis of the samples. In conclusion, EBI proves promising when used as an additional data generation step to tackle the non-availability of data in EM, as evaluated in selected applications. Enhancing adaptive sampling methods and refining EBI, especially considering their mutual influence, promotes higher throughput in EM with a lower electron dose while not diminishing quality.
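
    The core search step of exemplar-based inpainting, extended to 3D, can be sketched as a sum-of-squared-differences comparison over the known voxels of a target patch; this toy version ignores the acceleration and the fill-order priority scheme and is only meant to illustrate the idea, with all names hypothetical.

```python
import numpy as np

def best_exemplar(volume, known, center, size=5):
    """Return the fully known 3D patch most similar to the partially known target
    patch centered at `center`, using SSD over the target's known voxels."""
    r = size // 2
    tz, ty, tx = center
    target = volume[tz - r:tz + r + 1, ty - r:ty + r + 1, tx - r:tx + r + 1]
    valid = known[tz - r:tz + r + 1, ty - r:ty + r + 1, tx - r:tx + r + 1]
    best_patch, best_cost = None, np.inf
    zz, yy, xx = volume.shape
    for z in range(r, zz - r):
        for y in range(r, yy - r):
            for x in range(r, xx - r):
                cand_known = known[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                if not cand_known.all():
                    continue                                 # exemplars must be fully known
                cand = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                cost = np.sum(((cand - target) * valid) ** 2)
                if cost < best_cost:
                    best_patch, best_cost = cand, cost
    return best_patch
```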

    Wavelet-based noise reduction of cDNA microarray images

    The advent of microarray imaging technology has led to enormous progress in the life sciences by allowing scientists to analyze the expression of thousands of genes at a time. For complementary DNA (cDNA) microarray experiments, the raw data are a pair of red and green channel images corresponding to the treatment and control samples. These images are contaminated by a high level of noise due to the numerous noise sources affecting image formation. A major challenge of microarray image analysis is the extraction of accurate gene expression measurements from the noisy microarray images. A crucial step in this process is denoising, which consists of reducing the noise in the observed microarray images while preserving the signal information as much as possible. This thesis deals with the problem of developing novel methods for reducing noise in cDNA microarray images for accurate estimation of gene expression levels. Denoising methods based on the wavelet transform have shown significant success when applied to natural images; however, these methods are not very efficient for reducing noise in cDNA microarray images. An important reason for this is that existing methods only process the red and green channel images separately, and in doing so they ignore the signal correlation as well as the noise correlation that exists between the wavelet coefficients of the two channels. The primary objective of this research is to design efficient wavelet-based noise reduction algorithms for cDNA microarray images that take these inter-channel dependencies into account by jointly estimating the noise-free coefficients in both channels. Denoising algorithms are developed using two types of wavelet transforms, namely the frequently used discrete wavelet transform (DWT) and the complex wavelet transform (CWT). The main advantage of using the DWT for denoising is that this transform is computationally very efficient. To obtain better denoising performance for microarray images, however, the CWT is preferred to the DWT, because the former has good directional selectivity properties that are necessary for a better representation of the circular edges of spots. The linear minimum mean squared error and maximum a posteriori estimation techniques are used to develop bivariate estimators for the noise-free coefficients of the two images. These estimators are derived by utilizing appropriate joint probability density functions for the image coefficients as well as the noise coefficients of the two channels. Extensive experiments are carried out on a large set of cDNA microarray images to evaluate the performance of the proposed denoising methods as compared to existing ones. Comparisons are made using standard metrics such as the peak signal-to-noise ratio (PSNR), for measuring the amount of noise removed from the pixels of the images, and the mean absolute error, for measuring the accuracy of the estimated log-intensity ratios obtained from the denoised versions of the images. Results indicate that the proposed denoising methods, developed specifically for microarray images, do indeed lead to more accurate estimation of gene expression levels. Thus, it is expected that the proposed methods will play a significant role in improving the reliability of the results obtained from practical microarray experiments.
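
    A hedged sketch of a joint (bivariate) linear MMSE estimate for corresponding red and green channel wavelet coefficients is given below, using 2x2 signal and noise covariance matrices; in practice these covariances would be estimated per subband, and the names here are illustrative rather than the thesis estimators.

```python
import numpy as np

def bivariate_lmmse(y_red, y_green, cov_signal, cov_noise):
    """Jointly shrink corresponding red/green wavelet coefficients with a 2x2
    linear MMSE (Wiener) gain built from the inter-channel covariances."""
    y = np.stack([y_red.ravel(), y_green.ravel()])             # 2 x N coefficient pairs
    gain = cov_signal @ np.linalg.inv(cov_signal + cov_noise)  # 2 x 2 Wiener gain
    x = gain @ y
    return x[0].reshape(y_red.shape), x[1].reshape(y_green.shape)
```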

    Image Restoration Methods for Retinal Images: Denoising and Interpolation

    Retinal imaging provides an opportunity to detect pathological and natural age-related physiological changes in the interior of the eye. Diagnosis of retinal abnormality requires an image that is sharp, clear, and free of noise and artifacts. However, to prevent tissue damage, retinal imaging instruments use low illumination radiation, so the signal-to-noise ratio (SNR) is reduced, i.e., the noise power is high relative to the signal. Furthermore, noise is inherent in some imaging techniques. For example, in Optical Coherence Tomography (OCT), speckle noise is produced due to the coherence of the unwanted backscattered light. Improving OCT image quality by reducing speckle noise increases the accuracy of analyses and hence the diagnostic sensitivity. However, the challenge is to preserve image features while reducing speckle noise: there is a clear trade-off between feature preservation and speckle noise reduction in OCT. Averaging multiple OCT images taken from a single position provides a high-SNR image, but it drastically increases the scanning time. In this thesis, we develop a multi-frame image denoising method for Spectral Domain OCT (SD-OCT) images extracted from very close locations in an SD-OCT volume. The proposed denoising method was tested using two dictionaries: a nonlinear (NL) dictionary and a K-SVD-based adaptive dictionary. The NL dictionary was constructed by adding phase, polynomial, exponential, and boxcar functions to the conventional Discrete Cosine Transform (DCT) dictionary. The proposed method denoises nearby frames of the SD-OCT volume using a sparse representation method and combines them by selecting the median intensity pixels from the denoised nearby frames. The results showed that both dictionaries reduced the speckle noise in the OCT images; however, the adaptive dictionary showed slightly better results at the cost of higher computational complexity. The NL dictionary was also used for fundus and OCT image reconstruction, where its performance was always better than that of other analytical dictionaries, such as DCT and Haar. Because the adaptive dictionary involves a lengthy dictionary learning process, it cannot be used in real situations; we dealt with this problem by utilizing a low-rank approximation. In this approach, SD-OCT frames were divided into groups of noisy matrices consisting of non-local similar patches, and a noise-free patch matrix was obtained from each noisy patch matrix via a low-rank approximation. The noise-free patches from nearby frames were averaged to enhance the denoising. The denoised image obtained from the proposed approach was better than those obtained by several state-of-the-art methods. The proposed approach was also extended to jointly denoise and interpolate SD-OCT images; the results show that the joint denoising and interpolation method outperforms several existing state-of-the-art denoising methods combined with bicubic interpolation.
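
    The low-rank step can be sketched as singular value thresholding of a matrix whose columns are vectorized similar patches; the soft threshold and the patch grouping strategy are placeholders rather than the thesis settings.

```python
import numpy as np

def low_rank_denoise(patch_matrix, tau):
    """Each column is a vectorized similar patch; soft-threshold the singular values
    to obtain a low-rank (approximately noise-free) estimate of the patch group."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)           # shrink small (noise-dominated) singular values
    return (u * s) @ vt
```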