
    Feedforward data-aided phase noise estimation from a DCT basis expansion

    This contribution deals with phase noise estimation from pilot symbols. The phase noise process is approximated by an expansion of discrete cosine transform (DCT) basis functions containing only a few terms. We propose a feedforward algorithm that estimates the DCT coefficients without requiring detailed knowledge about the phase noise statistics. We demonstrate that the resulting (linearized) mean-square phase estimation error consists of two contributions: a contribution from the additive noise, which equals the Cramer-Rao lower bound, and a noise-independent contribution, which results from the phase noise modeling error. We investigate the effect of the symbol sequence length, the pilot symbol positions, the number of pilot symbols, and the number of estimated DCT coefficients on the estimation accuracy and on the corresponding bit error rate (BER). We propose a pilot symbol configuration that allows estimating any number of DCT coefficients not exceeding the number of pilot symbols, providing a considerable performance improvement as compared to other pilot symbol configurations. For large block sizes, the DCT-based estimation algorithm substantially outperforms algorithms that estimate only the time-average or the linear trend of the carrier phase. Copyright (C) 2009 J. Bhatti and M. Moeneclaey.
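    As a minimal illustration of the idea, the sketch below fits a few DCT-II basis functions to noisy phase measurements taken at pilot positions by ordinary least squares. The block length, pilot spacing, number of coefficients, and the toy Wiener phase-noise model are assumptions made for illustration, not the configuration used in the paper (Python/NumPy).

        import numpy as np

        # Toy Wiener phase noise and pilot-based phase measurements (assumed setup;
        # in practice the pilot phases would be obtained from the received symbols).
        K = 256                              # symbol block length (assumed)
        pilot_pos = np.arange(0, K, 16)      # equispaced pilot positions (assumed)
        N_coef = 8                           # number of DCT coefficients to estimate (assumed)

        rng = np.random.default_rng(0)
        true_phase = np.cumsum(0.01 * rng.standard_normal(K))
        pilot_phase = true_phase[pilot_pos] + 0.05 * rng.standard_normal(pilot_pos.size)

        # First N_coef DCT-II basis functions of length K (columns of B).
        k = np.arange(K)
        B = np.cos(np.pi * np.outer(k + 0.5, np.arange(N_coef)) / K)

        # Feedforward estimate: least-squares fit of the DCT coefficients to the
        # pilot phase measurements, requiring no phase-noise statistics.
        coef, *_ = np.linalg.lstsq(B[pilot_pos], pilot_phase, rcond=None)
        phase_hat = B @ coef                 # phase estimate over the whole block
        print("RMS phase error:", np.sqrt(np.mean((phase_hat - true_phase) ** 2)))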

    Image interpolation using Shearlet based iterative refinement

    This paper proposes an image interpolation algorithm exploiting sparse representation for natural images. It involves three main steps: (a) obtaining an initial estimate of the high-resolution image using linear methods such as FIR filtering, (b) promoting sparsity in a selected dictionary through iterative thresholding, and (c) extracting high-frequency information from the approximation to refine the initial estimate. For the sparse modeling, a shearlet dictionary is chosen to yield a multiscale directional representation. The proposed algorithm is compared to several state-of-the-art methods to assess its objective as well as subjective performance. Compared to the cubic spline interpolation method, an average PSNR gain of around 0.8 dB is observed over a dataset of 200 images.
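    The refinement loop can be sketched as follows; note that a separable 2-D DCT stands in here for the shearlet dictionary, and the threshold schedule, zoom factor, and iteration count are illustrative assumptions rather than the paper's settings.

        import numpy as np
        from scipy.fft import dctn, idctn
        from scipy.ndimage import zoom

        def interpolate_refine(low_res, factor=2, iters=30, t0=25.0, t1=1.0):
            # (a) initial high-resolution estimate via a linear method (cubic spline zoom)
            x = zoom(low_res.astype(float), factor, order=3)
            for t in np.linspace(t0, t1, iters):
                # (b) promote sparsity in the transform domain by hard thresholding
                c = dctn(x, norm='ortho')
                c[np.abs(c) < t] = 0.0
                x = idctn(c, norm='ortho')
                # (c) refine: re-impose consistency with the observed low-resolution pixels
                x[::factor, ::factor] = low_res
            return x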

    Spatial and Temporal Image Prediction with Magnitude and Phase Representations

    In this dissertation, I develop the theory and techniques for spatial and temporal image prediction with the magnitude and phase representation of the Complex Wavelet Transform (CWT) or the over-complete DCT to solve the problems of image inpainting and motion compensated inter-picture prediction. First, I develop the theory and algorithms of image reconstruction from the analytic magnitude or phase of the CWT. I prove the conditions under which a signal is uniquely specified by its analytic magnitude or phase, propose iterative algorithms for the reconstruction of a signal from its analytic CWT magnitude or phase, and analyze the convergence of the proposed algorithms. Image reconstruction from the magnitude and pseudo-phase of the over-complete DCT is also discussed and demonstrated. Second, I propose simple geometrical models of the CWT magnitude and phase to describe edges and structured textures, and develop a spatial image prediction (inpainting) algorithm based on those models and the iterative image reconstruction mentioned above. Piecewise smooth signals, structured textures, and their mixtures can be predicted successfully with the proposed algorithm. Simulation results show that the proposed algorithm achieves appealing visual quality with low computational complexity. Finally, I propose a novel temporal (inter-picture) image predictor for hybrid video coding. The proposed predictor enables successful predictive coding during fades, blended scenes, temporally decorrelated noise, and many other temporal evolutions that are beyond the capability of traditional motion compensated prediction methods. The proposed predictor estimates the transform magnitude and phase of the desired motion compensated prediction by exploiting the temporal and spatial correlations of the transform coefficients. For implementation in standard hybrid video coders, the over-complete DCT is chosen over the CWT. Better coding performance is achieved with the state-of-the-art H.264/AVC video encoder equipped with the proposed predictor. The proposed predictor is also successfully applied to image registration.
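    In the spirit of the iterative reconstruction and inpainting algorithms described above, the toy sketch below fills in missing pixels by alternating between thresholding in a transform domain and re-imposing the known pixels. A 2-D DCT is used here in place of the complex wavelet transform, and the decreasing threshold schedule is an assumption, so this illustrates only the iteration style, not the dissertation's algorithm.

        import numpy as np
        from scipy.fft import dctn, idctn

        def inpaint(image, known_mask, iters=100, t0=50.0, t1=1.0):
            """known_mask: boolean array, True where pixels are observed."""
            x = image * known_mask                    # unknown pixels start at zero
            for t in np.linspace(t0, t1, iters):      # decreasing threshold schedule
                c = dctn(x, norm='ortho')
                c[np.abs(c) < t] = 0.0                # keep only significant coefficients
                x = idctn(c, norm='ortho')
                x[known_mask] = image[known_mask]     # re-impose the known pixels
            return x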

    Quadtree Structured Approximation Algorithms

    The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. Many sparsity-promoting transforms exist, including wavelets, the so-called 'lets' family of transforms, and more recent non-local learned transforms. The first part of this thesis reviews sparse approximation theory, particularly in relation to 2-D piecewise polynomial signals. We also show the connection between this theory and current state-of-the-art algorithms that cover the following image restoration and enhancement applications: denoising, deconvolution, interpolation and multi-view super-resolution. In [63], Shukla et al. proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In the second part of this thesis we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. Moreover, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address this issue by searching for a suitable subspace much more efficiently using the mathematics of updating matrix factorisations. Novel algorithms are developed to tackle the four problems previously mentioned. Simulation results indicate that we beat state-of-the-art results when the original signal is in the model (e.g. depth images) and are competitive for natural images when the degradation is high.
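    A toy version of the quadtree idea is sketched below: each node is approximated by its mean value (a degree-zero polynomial), and a node is split into four children whenever doing so lowers a description-length style cost of the form distortion plus a per-leaf penalty. The penalty weight, the constant-only model, and the square power-of-two image assumption are simplifications for illustration, not the thesis' construction.

        import numpy as np

        def quadtree_approx(img, x=0, y=0, size=None, lam=200.0):
            """Return (approximation, cost) for the square block at (x, y) of side `size`."""
            if size is None:
                size = img.shape[0]                   # assumes a square, power-of-two image
            block = img[y:y + size, x:x + size].astype(float)
            mean = block.mean()
            leaf_cost = np.sum((block - mean) ** 2) + lam      # distortion + per-leaf model cost
            if size > 1:
                h = size // 2
                children = [(x, y), (x + h, y), (x, y + h), (x + h, y + h)]
                results = [quadtree_approx(img, cx, cy, h, lam) for cx, cy in children]
                split_cost = sum(cost for _, cost in results)
                if split_cost < leaf_cost:            # split only if it lowers the total cost
                    out = np.empty((size, size))
                    for (cx, cy), (child, _) in zip(children, results):
                        out[cy - y:cy - y + h, cx - x:cx - x + h] = child
                    return out, split_cost
            return np.full((size, size), mean), leaf_cost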

    A Survey of the methods on fingerprint orientation field estimation

    Fingerprint orientation field (FOF) estimation plays a key role in enhancing the performance of the automated fingerprint identification system (AFIS): accurate estimation of the FOF can evidently improve the performance of an AFIS. However, despite the enormous attention devoted to FOF estimation research in the past decades, the accurate estimation of FOFs, especially for poor-quality fingerprints, remains a challenging task. In this paper, we review and categorize the large number of FOF estimation methods proposed in the specialized literature, with particular attention to the most recent work in this area. Broadly speaking, the existing FOF estimation methods can be grouped into three categories: gradient-based methods, mathematical model-based methods, and learning-based methods. Identifying and explaining the advantages and limitations of these FOF estimation methods is of fundamental importance for fingerprint identification, because only a full understanding of the nature of these methods can shed light on the most essential issues for FOF estimation. In this paper, we provide a comprehensive discussion and analysis of these methods concerning their advantages and limitations. We have also conducted experiments using a publicly available competition dataset to compare the performance of the most relevant algorithms and methods.
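    As a concrete example of the gradient-based category, the sketch below implements the classical squared-gradient (doubled-angle) orientation estimator with block averaging; the Sobel operator and the block size are common choices assumed here for illustration.

        import numpy as np
        from scipy.ndimage import sobel, uniform_filter

        def orientation_field(img, block=16):
            gx = sobel(img.astype(float), axis=1)
            gy = sobel(img.astype(float), axis=0)
            # Averaging products of gradients (doubled-angle representation) keeps
            # opposite gradient directions from cancelling each other.
            gxx = uniform_filter(gx * gx, block)
            gyy = uniform_filter(gy * gy, block)
            gxy = uniform_filter(gx * gy, block)
            theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)     # dominant gradient angle
            return theta + np.pi / 2.0                         # ridge orientation is perpendicular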

    Centralized and distributed semi-parametric compression of piecewise smooth functions

    This thesis introduces novel wavelet-based semi-parametric centralized and distributed compression methods for a class of piecewise smooth functions. Our proposed compression schemes are based on a non-conventional transform coding structure with simple independent encoders and a complex joint decoder. Current centralized state-of-the-art compression schemes are based on the conventional structure where an encoder is relatively complex and nonlinear. In addition, the setting usually allows the encoder to observe the entire source. Recently, there has been an increasing need for compression schemes where the encoder is lower in complexity and, instead, the decoder has to handle more computationally intensive tasks. Furthermore, the setup may involve multiple encoders, where each one can only partially observe the source. Such a scenario is often referred to as distributed source coding. In the first part, we focus on the dual situation of the centralized compression where the encoder is linear and the decoder is nonlinear. Our analysis is centered around a class of 1-D piecewise smooth functions. We show that, by incorporating parametric estimation into the decoding procedure, it is possible to achieve the same distortion-rate performance as that of a conventional wavelet-based compression scheme. We also present a new constructive approach to parametric estimation based on the sampling results of signals with finite rate of innovation. The second part of the thesis focuses on the distributed compression scenario, where each independent encoder partially observes the 1-D piecewise smooth function. We propose a new wavelet-based distributed compression scheme that uses parametric estimation to perform joint decoding. Our distortion-rate analysis shows that it is possible for the proposed scheme to achieve the same compression performance as that of a joint encoding scheme. Lastly, we apply the proposed theoretical framework in the context of distributed image and video compression. We start by considering a simplified model of the video signal and show that we can achieve distortion-rate performance close to that of a joint encoding scheme. We then present practical compression schemes for real-world signals. Our simulations confirm the improvement in performance over classical schemes, both in terms of the PSNR and the visual quality.
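    The linear-encoder / nonlinear-decoder idea can be illustrated on a toy piecewise-constant signal: the encoder transmits only coarse block averages (a simple linear description), and the decoder fits a parametric step model (two amplitudes and a discontinuity location) to them. Unlike the thesis' FRI-based estimation, this toy decoder only resolves the edge to block precision; the block size and the single-step signal model are assumptions.

        import numpy as np

        def encode(x, factor=16):
            # Linear encoder: coarse block averages (length must be a multiple of `factor`).
            return x.reshape(-1, factor).mean(axis=1)

        def decode(coarse, factor=16):
            # Nonlinear decoder: parametric fit of a single step (amplitudes a, b and edge k).
            best = None
            for k in range(1, coarse.size):
                a, b = coarse[:k].mean(), coarse[k:].mean()
                err = np.sum((coarse[:k] - a) ** 2) + np.sum((coarse[k:] - b) ** 2)
                if best is None or err < best[0]:
                    best = (err, k, a, b)
            _, k, a, b = best
            x_hat = np.full(coarse.size * factor, a)
            x_hat[k * factor:] = b                    # re-synthesise a sharp discontinuity
            return x_hat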

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, to automatically select a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.)
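    A minimal sketch of the sparse-coding step is given below: the iterative shrinkage-thresholding algorithm (ISTA) computes a sparse code for a signal over a fixed dictionary by solving an l1-regularised least-squares problem. The random dictionary, regularisation weight, and iteration count are placeholders; in the settings surveyed above the dictionary would be learned from data.

        import numpy as np

        def ista(D, x, lam=0.1, iters=200):
            """Sparse code of x over dictionary D via iterative soft thresholding."""
            L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(iters):
                g = D.T @ (D @ a - x)                  # gradient of the data-fit term
                z = a - g / L
                a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return a

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
        x = D[:, [3, 77, 200]] @ np.array([1.0, -0.5, 2.0])   # synthetic 3-sparse signal
        code = ista(D, x)
        print("non-zeros in the recovered code:", np.count_nonzero(np.abs(code) > 1e-3))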