
    MULTIRIDGELETS FOR TEXTURE ANALYSIS

    Directional wavelets have orientation selectivity and are therefore able to represent highly anisotropic elements such as line segments and edges efficiently. The ridgelet transform is a directional multiresolution transform that has been successful in many image processing and texture analysis applications. The objective of this research is to develop a multiridgelet transform by applying the multiwavelet transform to the Radon transform, so as to attain several attractive improvements. By adapting cardinal orthogonal multiwavelets to the ridgelet transform, it is shown that the proposed cardinal multiridgelet transform (CMRT) possesses cardinality, approximate translation invariance, and approximate rotation invariance simultaneously, whereas no single ridgelet transform holds all of these properties at the same time. These properties are beneficial to image texture analysis, as demonstrated in three application studies. First, a texture database retrieval study using a portion of the Brodatz texture album demonstrated that the CMRT-based texture representation performed better for database retrieval than other directional wavelet methods. Second, a study of LCD mura defect detection, based on the classification of simulated abnormalities with a linear support vector machine classifier, showed that CMRT-based analysis of defects provides efficient features and superior detection performance compared with other competitive methods. Last, and most importantly, a study on prostate cancer tissue image classification was conducted. With CMRT-based texture extraction, Gaussian-kernel support vector machines were developed to discriminate prostate cancer Gleason grade 3 from grade 4. Based on a limited database of prostate specimens, one classifier was trained to achieve remarkable test performance. This approach is promising and worth developing fully.
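
    The ridgelet construction underlying the CMRT amounts to a one-dimensional wavelet transform applied along the projections of the Radon transform. The sketch below illustrates that pipeline with an ordinary discrete wavelet ('db2') standing in for the cardinal orthogonal multiwavelets, and with simple subband energies as texture features; it is an illustration of the idea, not the authors' CMRT implementation.

```python
# Ridgelet-style texture descriptor: Radon transform followed by a 1-D wavelet
# transform along each projection. A plain 'db2' wavelet stands in for the
# cardinal orthogonal multiwavelets of the CMRT described in the abstract.
import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_features(image, n_angles=32, wavelet="db2", levels=3):
    """Return subband-energy features of a ridgelet-style decomposition."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta, circle=False)   # columns = projections
    features = []
    for band in pywt.wavedec(sinogram, wavelet, level=levels, axis=0):
        # Energy of each wavelet scale, pooled over all projection angles,
        # gives a small, roughly rotation-tolerant texture signature.
        features.append(np.mean(band ** 2))
    return np.asarray(features)
```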

    Directional multiresolution image representations

    Efficient representation of visual information lies at the foundation of many image processing tasks, including compression, filtering, and feature extraction. Efficiency of a representation refers to the ability to capture significant information about an object of interest in a small description. For practical applications, this representation has to be realized by structured transforms and fast algorithms. Recently, it has become evident that commonly used separable transforms (such as wavelets) are not necessarily best suited for images. Thus, there is a strong motivation to search for more powerful schemes that can capture the intrinsic geometrical structure of pictorial information. This thesis focuses on the development of new "true" two-dimensional representations for images. The emphasis is on a discrete framework that can lead to algorithmic implementations. The first method constructs multiresolution, local, and directional image expansions by using non-separable filter banks. This discrete transform is developed in connection with the continuous-space curvelet construction in harmonic analysis. As a result, the proposed transform provides an efficient representation for two-dimensional piecewise smooth signals that resemble images. The link between the developed filter banks and the continuous-space constructions is set up in a newly defined directional multiresolution analysis. The second method constructs a new family of block directional and orthonormal transforms based on the ridgelet idea, and thus offers an efficient representation for images that are smooth away from straight edges. Finally, directional multiresolution image representations are employed together with statistical modeling, leading to powerful texture models and successful image retrieval systems.
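
    The retrieval application mentioned in the last sentence can be pictured as computing statistics of multiscale directional subbands and ranking database images by distance to the query's signature. In the sketch below, a separable 2-D wavelet decomposition stands in for the directional filter banks of the thesis, and a plain Euclidean distance replaces the statistical models; it is a simplified illustration, not the thesis implementation.

```python
# Texture retrieval from subband statistics: each image is summarized by the
# mean absolute value and standard deviation of its wavelet subbands, and
# database images are ranked by Euclidean distance to the query signature.
import numpy as np
import pywt

def texture_signature(image, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    stats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) bands per level
        for band in detail:
            stats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.asarray(stats)

def retrieve(query, database, top_k=5):
    """Return indices of the top_k database images closest to the query."""
    q = texture_signature(query)
    dists = [np.linalg.norm(q - texture_signature(img)) for img in database]
    return np.argsort(dists)[:top_k]
```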

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off efficiency against complexity while achieving accurate rendering of smooth regions as well as faithful reproduction of contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and, sometimes, invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.

    Comment: 65 pages, 33 figures, 303 references
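
    The remark about redundant dictionaries and the search for a sparse representation can be illustrated with orthogonal matching pursuit over an overcomplete dictionary; the random dictionary below is a purely illustrative stand-in for the structured multiscale, multi-orientation dictionaries surveyed in the paper.

```python
# Sparse coding in a redundant (overcomplete) dictionary with orthogonal
# matching pursuit; the random dictionary is only a stand-in for structured
# multiscale, multi-orientation dictionaries.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms = 64, 256                 # 4x redundant dictionary
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

# Synthesize a signal that is exactly 5-sparse in the dictionary.
support = rng.choice(n_atoms, size=5, replace=False)
x_true = np.zeros(n_atoms)
x_true[support] = rng.standard_normal(5)
signal = D @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(D, signal)
print("recovered support:", np.flatnonzero(omp.coef_))
```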

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited for images because they cannot exploit directional regularities such as edges and oriented textural patterns, while most recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images that can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to the case where the image is corrupted by noise. The problem is tackled by combining a "Signal+Noise" frequency model, a refinement stage, and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, to the local MFT coefficients. This approach is particularly effective in denoising natural images, owing to its ability to preserve both types of feature. Further improvements can be made by using the information given by the linear feature extraction process to configure the filter. The denoising results compare favourably against other state-of-the-art directional representations.
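
    The Gaussian frequency filter described above can be pictured as a block-wise operation on local spectra: the coefficients of each block are attenuated by a Gaussian whose width follows the spread (dispersion) of that block's magnitude spectrum. The sketch below uses a plain block FFT in place of the MFT and non-overlapping blocks; it is a simplified illustration under those assumptions, not the thesis method.

```python
# Local frequency-domain denoising: each block's spectrum is attenuated by a
# Gaussian whose width tracks the block's spectral dispersion. A block FFT
# stands in for the multiresolution Fourier transform (MFT).
import numpy as np

def denoise_blockwise(image, block=16):
    """Image dimensions are assumed to be multiples of `block`."""
    out = np.zeros(image.shape, dtype=float)
    fy = np.fft.fftfreq(block)[:, None]
    fx = np.fft.fftfreq(block)[None, :]
    radius2 = fy ** 2 + fx ** 2                      # squared frequency radius
    for i in range(0, image.shape[0] - block + 1, block):
        for j in range(0, image.shape[1] - block + 1, block):
            spectrum = np.fft.fft2(image[i:i + block, j:j + block])
            mag = np.abs(spectrum)
            # Dispersion: magnitude-weighted spread of the frequency radius.
            sigma2 = np.sum(mag * radius2) / (np.sum(mag) + 1e-12)
            weight = np.exp(-radius2 / (2.0 * sigma2 + 1e-12))
            out[i:i + block, j:j + block] = np.fft.ifft2(weight * spectrum).real
    return out
```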

    Impact of Feature Representation on Remote Sensing Image Retrieval

    Remote sensing images are acquired using special platforms and sensors and are classified as aerial, multispectral, and hyperspectral images. Multispectral and hyperspectral images are represented by large spectral vectors compared with ordinary red, green, blue (RGB) images. Hence, retrieving remote sensing images from large archives is a challenging task. Remote sensing image retrieval mainly consists of feature representation as the first step and finding images similar to a query image as the second step. Feature representation plays an important part in the performance of the retrieval process. This research work focuses on the impact of the feature representation of remote sensing images on the performance of remote sensing image retrieval. The study shows that more discriminative features of remote sensing images are needed to improve the performance of the remote sensing image retrieval process.
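
    The two steps described here, feature representation followed by similarity search, can be sketched as follows; the per-band statistics and the function names (spectral_signature, build_index) are illustrative stand-ins rather than a prescribed representation.

```python
# Two-step retrieval pipeline: (1) represent each multispectral/hyperspectral
# image by a feature vector (here, simple per-band statistics), and (2) find
# the archive images most similar to a query with a nearest-neighbour search.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def spectral_signature(image):
    """image: array of shape (height, width, bands) -> per-band mean and std."""
    bands = image.reshape(-1, image.shape[-1])
    return np.concatenate([bands.mean(axis=0), bands.std(axis=0)])

def build_index(archive, n_neighbors=5):
    """Fit a nearest-neighbour index over the feature vectors of the archive."""
    features = np.stack([spectral_signature(img) for img in archive])
    return NearestNeighbors(n_neighbors=n_neighbors).fit(features)

def query_archive(index, query_image):
    _, idx = index.kneighbors(spectral_signature(query_image)[None, :])
    return idx[0]                              # indices of the most similar images
```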