30 research outputs found

    Multiresolution image models and estimation techniques


    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited for images because of their inability to exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of feature in a unified transform. This thesis focuses on the development of directional representations for images that capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to the situation in which the image is corrupted by noise. The problem is tackled by combining a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms, the multiscale polar cosine transforms (MPCT), is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with lower complexity is then considered. This is achieved by applying a Gaussian frequency filter, matched to the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, owing to its ability to preserve both types of feature. Further improvements can be made by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
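
    As a generic illustration of the filtering idea above (not the thesis's MFT pipeline), the sketch below reweights the windowed Fourier coefficients of overlapping blocks with a Gaussian matched to the dispersion of each block's magnitude spectrum and then reconstructs; the block size, overlap, Hann window and the local_gaussian_spectrum_filter helper are all illustrative assumptions.

    import numpy as np

    def local_gaussian_spectrum_filter(image, block=32, step=16, eps=1e-8):
        """Reweight local Fourier coefficients with a Gaussian matched to the
        dispersion of each block's magnitude spectrum (illustrative sketch)."""
        h, w = image.shape
        out = np.zeros((h, w))
        norm = np.zeros((h, w))
        win = np.outer(np.hanning(block), np.hanning(block))
        fy, fx = np.meshgrid(np.fft.fftfreq(block), np.fft.fftfreq(block), indexing="ij")
        freqs = np.stack([fy.ravel(), fx.ravel()])          # 2 x block^2 frequency grid

        for y in range(0, h - block + 1, step):
            for x in range(0, w - block + 1, step):
                patch = image[y:y + block, x:x + block] * win
                F = np.fft.fft2(patch)
                mag = np.abs(F).ravel()
                mag[0] = 0.0                                 # ignore the DC term
                p = mag / (mag.sum() + eps)
                cov = (freqs * p) @ freqs.T + eps * np.eye(2)    # spectral dispersion
                q = np.einsum("ni,ij,nj->n", freqs.T, np.linalg.inv(cov), freqs.T)
                weight = np.exp(-0.5 * q).reshape(block, block)
                weight[0, 0] = 1.0                           # keep the local mean
                out[y:y + block, x:x + block] += np.real(np.fft.ifft2(F * weight)) * win
                norm[y:y + block, x:x + block] += win ** 2
        return out / np.maximum(norm, eps)

    noisy = np.random.default_rng(0).random((128, 128))
    print(local_gaussian_spectrum_filter(noisy).shape)

    Because the weight follows the local spectral shape, oriented structure such as edges and textures is attenuated less than isotropic noise, which is the intuition behind matching the filter to the spectral dispersion.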

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
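
    A minimal sketch of the patch-based dictionary-learning setting covered by the monograph is given below, using scikit-learn's MiniBatchDictionaryLearning with orthogonal matching pursuit for the sparse codes; the patch size, dictionary size and sparsity level are illustrative choices rather than values from the text.

    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))                       # stand-in for a natural image

    # Collect and center 8x8 patches.
    patches = extract_patches_2d(image, (8, 8), max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)

    # Learn a 128-atom dictionary; each patch is coded with a few atoms (OMP).
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(X).transform(X)                     # sparse coefficients
    D = dico.components_                                 # learned dictionary atoms
    reconstruction = codes @ D                           # sparse approximation of the patches
    print(codes.shape, D.shape, float(np.mean((X - reconstruction) ** 2)))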

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low-complexity implementation of CQF wavelets and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
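
    As a concrete illustration of the octave-scale frequency and spatial locality that the DWT provides, the sketch below computes a three-level 2-D decomposition with PyWavelets and verifies perfect reconstruction; the wavelet and level count are arbitrary choices.

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    image = rng.random((256, 256))

    # Three-level decomposition: one approximation band plus three detail bands per level.
    coeffs = pywt.wavedec2(image, wavelet="db4", level=3)
    cA3 = coeffs[0]                      # coarsest approximation (low-pass) band
    cH1, cV1, cD1 = coeffs[-1]           # finest horizontal/vertical/diagonal details

    # Perfect reconstruction from the full coefficient set.
    restored = pywt.waverec2(coeffs, wavelet="db4")
    print(cA3.shape, cH1.shape, np.allclose(image, restored[:256, :256]))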

    Perceptually inspired image estimation and enhancement

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (p. 137-144). In this thesis, we present three image estimation and enhancement algorithms inspired by human vision. In the first part of the thesis, we propose an algorithm for mapping one image to another based on the statistics of a training set. Many vision problems can be cast as image mapping problems, such as estimating reflectance from luminance, estimating shape from shading, separating signal and noise, etc. Such problems are typically under-constrained, and yet humans are remarkably good at solving them. Classic computational theories about the ability of the human visual system to solve such under-constrained problems attribute this feat to the use of some intuitive regularities of the world, e.g., surfaces tend to be piecewise constant. In recent years, there has been considerable interest in deriving more sophisticated statistical constraints from natural images, but because of the high-dimensional nature of images, representing and utilizing the learned models remains a challenge. Our techniques produce models that are very easy to store and to query. We show these techniques to be effective for a number of applications: removing noise from images, estimating a sharp image from a blurry one, decomposing an image into reflectance and illumination, and interpreting lightness illusions. In the second part of the thesis, we present an algorithm for compressing the dynamic range of an image while retaining important visual detail. The human visual system confronts a serious challenge with dynamic range, in that the physical world has an extremely high dynamic range, while neurons have low dynamic ranges. The human visual system performs dynamic range compression by applying automatic gain control, in both the retina and the visual cortex. Taking inspiration from that, we designed techniques that involve multi-scale subband transforms and smooth gain control on subband coefficients, and resemble the contrast gain control mechanism in the visual cortex. We show our techniques to be successful in producing dynamic-range-compressed images without compromising the visibility of detail or introducing artifacts. We also show that the techniques can be adapted for the related problem of "companding", in which a high dynamic range image is converted to a low dynamic range image and saved using fewer bits, and later expanded back to high dynamic range with minimal loss of visual quality. In the third part of the thesis, we propose a technique that enables a user to easily localize image and video editing by drawing a small number of rough scribbles. Image segmentation, usually treated as an unsupervised clustering problem, is extremely difficult to solve. With a minimal degree of user supervision, however, we are able to generate selection masks with good quality. Our technique learns a classifier using the user-scribbled pixels as training examples, and uses the classifier to classify the rest of the pixels into distinct classes. It then uses the classification results as per-pixel data terms, combines them with a smoothness term that respects color discontinuities, and generates better results than state-of-the-art algorithms for interactive segmentation. By Yuanzhen Li.
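
    The subband gain-control idea from the second part can be sketched in generic form: decompose the log-luminance into band-pass subbands and apply a smooth, amplitude-dependent gain to each one. The sketch below uses a simple Gaussian-pyramid-style decomposition as a stand-in for the thesis's multi-scale subband transform; the number of levels, the gain exponent and the compress_dynamic_range helper are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compress_dynamic_range(luminance, levels=4, gamma=0.6, eps=1e-4):
        log_lum = np.log(luminance + eps)
        base = log_lum
        out = np.zeros_like(log_lum)
        for k in range(levels):
            blurred = gaussian_filter(base, sigma=2.0 ** (k + 1))
            band = base - blurred                            # band-pass subband
            # Smooth gain: weak local contrast receives more gain than strong contrast.
            amplitude = gaussian_filter(np.abs(band), sigma=2.0 ** (k + 1)) + eps
            gain = (amplitude / amplitude.max()) ** (gamma - 1.0)
            out += band * gain
            base = blurred
        out += gamma * base                                  # compress the low-pass residual
        return np.exp(out) - eps

    # Synthetic high-dynamic-range ramp: the output range is strongly compressed.
    hdr = np.exp(np.linspace(0.0, 8.0, 256))[None, :] * np.ones((64, 1))
    ldr = compress_dynamic_range(hdr)
    print(round(hdr.max() / hdr.min()), round(ldr.max() / ldr.min()))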

    Multiridgelets for Texture Analysis

    Directional wavelets have orientation selectivity and are thus able to efficiently represent highly anisotropic elements such as line segments and edges. The ridgelet transform is a directional multi-resolution transform that has been successful in many image processing and texture analysis applications. The objective of this research is to develop a multiridgelet transform by applying the multiwavelet transform to the Radon transform, so as to combine their advantages. By adapting the cardinal orthogonal multiwavelets to the ridgelet transform, it is shown that the proposed cardinal multiridgelet transform (CMRT) possesses cardinality, approximate translation invariance, and approximate rotation invariance simultaneously, whereas no single ridgelet transform holds all of these properties at the same time. These properties are beneficial to image texture analysis, as demonstrated in three texture analysis applications. First, a texture database retrieval study on a portion of the Brodatz texture album demonstrated that the CMRT-based texture representation performed better for database retrieval than other directional wavelet methods. Second, in a study of LCD mura defect detection based on classifying simulated abnormalities with a linear support vector machine classifier, CMRT-based analysis of defects was shown to provide efficient features and superior detection performance compared with competing methods. Lastly, and most importantly, a study on prostate cancer tissue image classification was conducted. Using CMRT-based texture features, Gaussian-kernel support vector machines were developed to discriminate prostate cancer Gleason grade 3 from grade 4. Based on a limited database of prostate specimens, a classifier was trained that achieved remarkable test performance. This approach is unquestionably promising and worth developing fully.
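
    A generic ridgelet-style feature pipeline illustrates the idea of taking a wavelet transform along Radon projections. The sketch below uses scikit-image's Radon transform and an ordinary 1-D wavelet (a stand-in for the cardinal multiwavelets of the CMRT), with per-subband energies fed to a support vector machine; the angles, wavelet, patch size and toy textures are all illustrative assumptions.

    import numpy as np
    import pywt
    from skimage.transform import radon
    from sklearn.svm import SVC

    def ridgelet_energy_features(patch, n_angles=16, wavelet="db2", level=3):
        theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        sinogram = radon(patch, theta=theta, circle=False)   # one projection per column
        feats = []
        for i in range(n_angles):
            for band in pywt.wavedec(sinogram[:, i], wavelet, level=level):
                feats.append(np.log(np.mean(band ** 2) + 1e-12))   # per-subband energy
        return np.array(feats)

    # Toy example: oriented sinusoidal textures at two orientations.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:64, 0:64]

    def make_patch(angle_deg):
        t = np.cos(np.deg2rad(angle_deg)) * xx + np.sin(np.deg2rad(angle_deg)) * yy
        return np.sin(0.5 * t) + 0.1 * rng.standard_normal((64, 64))

    X = np.array([ridgelet_energy_features(make_patch(a)) for a in [10] * 20 + [70] * 20])
    y = np.array([0] * 20 + [1] * 20)
    clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))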

    Local Geometric Transformations in Image Analysis

    The characterization of images by geometric features facilitates the precise analysis of the structures found in biological micrographs such as cells, proteins, or tissues. In this thesis, we study image representations that are adapted to local geometric transformations such as rotation, translation, and scaling, with a special emphasis on wavelet representations. In the first part of the thesis, our main interest is in the analysis of directional patterns and the estimation of their location and orientation. We explore steerable representations that correspond to the notion of rotation. Contrary to classical pattern-matching techniques, they require neither an a priori discretization of the angle nor matching of the filter to the image at each discretized direction. Instead, it is sufficient to apply the filtering only once; the rotated filter for any arbitrary angle can then be determined by a systematic and linear transformation of the initial filter. We derive the Cramér-Rao bounds for steerable filters. They allow us to select the best harmonics for the design of steerable detectors and to identify their optimal radial profile. We propose several ways to construct optimal representations and to build powerful and effective detection schemes, in particular for junctions of coinciding branches with local orientations. The basic idea of local transformability and the general principles that we use to design steerable wavelets can be applied to other geometric transformations. Accordingly, in the second part, we extend our framework to other transformation groups, with a particular interest in scaling. To construct representations in tune with a notion of local scale, we identify the possible solutions for scalable functions and give specific criteria for their applicability to wavelet schemes. Finally, we propose discrete wavelet frames that approximate a continuous wavelet transform. Based on these results, we present a novel wavelet-based image-analysis software package that provides fast and automatic detection of circular patterns, combined with a precise estimation of their size.
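
    The steering property described above can be demonstrated with the classic first-order example: the response of a derivative-of-Gaussian filter at any orientation is a linear combination of just two basis responses. This is a generic illustration rather than the wavelet designs of the thesis; the filter width and test angle are arbitrary.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))
    sigma, theta = 2.0, np.deg2rad(30.0)

    # Basis responses: x- and y-derivatives of the Gaussian-smoothed image.
    gx = gaussian_filter(image, sigma=sigma, order=(0, 1))   # d/dx (along columns)
    gy = gaussian_filter(image, sigma=sigma, order=(1, 0))   # d/dy (along rows)

    # Steered response at an arbitrary angle, from the two basis responses alone:
    # no filter is rotated or re-applied for the new orientation.
    steered = np.cos(theta) * gx + np.sin(theta) * gy

    # The locally dominant orientation also follows directly from the basis responses.
    orientation = np.arctan2(gy, gx)
    print(steered.shape, orientation.shape)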

    Pedestrian Detection Algorithms using Shearlets

    In this thesis, we investigate the applicability of the shearlet transform to the task of pedestrian detection. Due to its use in several emerging technologies, such as automated or autonomous vehicles, pedestrian detection has evolved into a key research topic over the last decade, and a wealth of different algorithms has been developed in this period. According to the current results on the Caltech Pedestrian Detection Benchmark, these algorithms can be divided into two categories: first, methods that apply hand-crafted image features together with a classifier trained on these features; second, methods using convolutional neural networks, in which features are learned during the training phase. We study how both types of procedures can be further improved by incorporating shearlets, a framework for image analysis with a comprehensive theoretical basis.
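
    The first category (hand-crafted features plus a trained classifier) can be sketched generically, as below. Oriented derivative-of-Gaussian responses stand in for shearlet feature channels, since no shearlet library is assumed here; the window size, orientations, pooling grid and toy labels are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.svm import LinearSVC

    def feature_channels(window, sigmas=(1.0, 2.0), n_orient=4):
        gx = {s: gaussian_filter(window, s, order=(0, 1)) for s in sigmas}
        gy = {s: gaussian_filter(window, s, order=(1, 0)) for s in sigmas}
        feats = []
        for s in sigmas:
            for k in range(n_orient):
                t = np.pi * k / n_orient
                resp = np.cos(t) * gx[s] + np.sin(t) * gy[s]     # oriented band response
                # Pool each response into a coarse 4x4 grid of mean absolute values.
                cells = np.abs(resp).reshape(4, resp.shape[0] // 4, 4, resp.shape[1] // 4)
                feats.append(cells.mean(axis=(1, 3)).ravel())
        return np.concatenate(feats)

    # Toy training set: "pedestrian" windows contain a bright vertical bar, negatives do not.
    rng = np.random.default_rng(0)

    def make_window(positive):
        w = rng.random((32, 16)) * 0.2
        if positive:
            w[:, 6:10] += 1.0
        return w

    X = np.array([feature_channels(make_window(i % 2 == 0)) for i in range(200)])
    y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])
    clf = LinearSVC(dual=False).fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))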

    A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform

    Various image fusion techniques have been studied to meet the requirements of different applications such as concealed weapon detection, remote sensing, urban mapping, surveillance and medical imaging. Combining two or more images of the same scene or object produces a single image that is better suited to the application at hand. The conventional wavelet transform (WT) has been widely used in image fusion due to its advantages, including its multi-scale framework and its ability to isolate discontinuities at object edges. However, the contourlet transform (CT) has recently been adopted and applied to the image fusion process to overcome the drawbacks of the WT with its own advantages. Based on the experimental studies in this dissertation, the contourlet transform is shown to be more suitable than the conventional wavelet transform for image fusion. However, the contourlet transform also has major drawbacks. First, the contourlet transform framework does not provide the shift-invariance and structural information of the source images that are necessary to enhance fusion performance. Second, unwanted artifacts are produced during the image decomposition process in the contourlet framework, caused by setting some transform coefficients to zero for nonlinear approximation. In this dissertation, a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) is proposed to overcome the drawbacks of both the conventional wavelet and contourlet transforms and to enhance fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) is employed to provide both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) is used to achieve fewer artifacts and more directional information. The DCxWT provides the shift-invariance that is desired during the fusion process to avoid mis-registration problems; without it, the source images become mis-registered and misaligned with each other, and the fusion results are significantly degraded. The DCxWT also provides structural information through the imaginary part of its wavelet coefficients; hence, more relevant information can be preserved during the fusion process, giving a better representation of the fused image. Moreover, the HDFB is applied to the fusion framework, where the source images are decomposed to provide abundant directional information, lower complexity, and reduced artifacts. The proposed method is applied to five different categories of multimodal image fusion, and an experimental study is conducted to evaluate its performance in each category using suitable quality metrics. Various datasets, fusion algorithms, pre-processing techniques and quality metrics are used for each fusion category. In every experimental study and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms; its usefulness as a fusion method has therefore been validated and its high performance verified.
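
    The basic transform-domain fusion rule underlying such methods can be shown with a plain wavelet decomposition: average the approximation bands, keep the detail coefficient with the larger magnitude, and reconstruct. The sketch below uses PyWavelets with an ordinary real wavelet rather than the proposed DCxWT/HDFB hybrid; the wavelet, level and toy images are illustrative.

    import numpy as np
    import pywt

    def fuse_images(img_a, img_b, wavelet="db2", level=3):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [0.5 * (ca[0] + cb[0])]                      # average the low-pass bands
        for bands_a, bands_b in zip(ca[1:], cb[1:]):
            # Keep the detail coefficient with the larger magnitude in each subband.
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(bands_a, bands_b)))
        return pywt.waverec2(fused, wavelet)

    # Toy example: two "modalities" of the same scene with complementary detail.
    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    img_a = scene.copy(); img_a[:, 64:] = scene[:, 64:].mean()   # detail removed on the right
    img_b = scene.copy(); img_b[:, :64] = scene[:, :64].mean()   # detail removed on the left
    fused = fuse_images(img_a, img_b)[:128, :128]
    print(np.abs(fused - scene).mean() < np.abs(img_a - scene).mean())

    The maximum-absolute-coefficient rule is a common baseline; more elaborate schemes, such as the one proposed in this dissertation, replace both the transform and the coefficient-selection rule.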