203 research outputs found

    A novel coarse-to-fine remote sensing image retrieval system in JPEG-2000 compressed domain

    Get PDF
    Copyright 2018 Society of Photo‑Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited.
    This paper presents a novel content-based image search and retrieval (CBIR) system that achieves coarse-to-fine remote sensing (RS) image description and retrieval in the JPEG 2000 compressed domain. The proposed system initially: i) decodes the codestreams associated with the coarsest (i.e., the lowest) wavelet resolution, and ii) discards the archive images most irrelevant to the query image, selected on the basis of the similarities estimated between the coarse-resolution features of the query image and those of the archive images. Then, the codestreams associated with the subsequent resolution of the remaining archive images are decoded, and the most irrelevant images are again discarded by considering the features of both resolutions. This is achieved by estimating the similarities between the query image and the remaining images while giving higher weights to the features of the finer resolution and lower weights to those of the coarser resolution. To this end, the pyramid match kernel similarity measure is exploited. These steps are iterated until the codestreams associated with the highest wavelet resolution are decoded for only a very small set of images. In this way, the proposed system exploits a multiresolution, hierarchical feature space and accomplishes adaptive RS CBIR with significantly reduced retrieval time. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed system in terms of retrieval accuracy and time when compared to standard CBIR systems.
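
    The coarse-to-fine loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-level feature vectors, the exponential level weights, the histogram-intersection matching, and the `keep_frac` pruning schedule are all assumptions standing in for the paper's wavelet features and pyramid match kernel.

```python
import numpy as np

def pyramid_similarity(query_feats, img_feats, level):
    """Weighted similarity over wavelet resolutions 0..level.

    Finer resolutions receive exponentially larger weights, echoing
    the pyramid match kernel's level weighting (the exact weights and
    the histogram-intersection match are assumptions of this sketch).
    """
    sim = 0.0
    for l in range(level + 1):
        weight = 2.0 ** l  # finer level -> higher weight
        sim += weight * np.minimum(query_feats[l], img_feats[l]).sum()
    return sim

def coarse_to_fine_retrieval(query, archive, n_levels=3, keep_frac=0.5, top_k=2):
    """Rank one resolution at a time, pruning the least similar images."""
    candidates = list(archive.keys())
    for level in range(n_levels):
        # in the real system, only `candidates` would have this level decoded
        ranked = sorted(
            candidates,
            key=lambda name: pyramid_similarity(query, archive[name], level),
            reverse=True,
        )
        keep = max(top_k, int(np.ceil(len(ranked) * keep_frac)))
        candidates = ranked[:keep]
    return candidates[:top_k]
```

    Because the candidate set shrinks at every level, the expensive fine-resolution decoding is performed only for the few surviving images, which is the source of the reported retrieval-time savings.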

    MULTIRIDGELETS FOR TEXTURE ANALYSIS

    Get PDF
    Directional wavelets have orientation selectivity and are thus able to efficiently represent highly anisotropic elements such as line segments and edges. The ridgelet transform is a directional multiresolution transform that has been successful in many image processing and texture analysis applications. The objective of this research is to develop a multiridgelet transform by applying the multiwavelet transform to the Radon transform so as to attain attractive improvements. By adapting cardinal orthogonal multiwavelets to the ridgelet transform, it is shown that the proposed cardinal multiridgelet transform (CMRT) possesses cardinality, approximate translation invariance, and approximate rotation invariance simultaneously, whereas no single ridgelet transform holds all of these properties at the same time. These properties are beneficial to image texture analysis, as demonstrated in three studies of texture analysis applications. First, a texture database retrieval study on a portion of the Brodatz texture album demonstrated that the CMRT-based texture representation performs better for database retrieval than other directional wavelet methods. Second, a study of LCD mura defect detection, based on the classification of simulated abnormalities with a linear support vector machine classifier, showed that CMRT-based analysis of defects provides efficient features and superior detection performance compared with other competitive methods. Last and most importantly, a study on prostate cancer tissue image classification was conducted. With CMRT-based texture extraction, Gaussian kernel support vector machines were developed to discriminate prostate cancer Gleason grade 3 from grade 4. On a limited database of prostate specimens, the trained classifier achieved remarkable test performance. This approach is promising and worth developing fully.
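
    The core construction, a 1-D wavelet transform applied to Radon projections, can be sketched in a few lines. This is a simplified illustration, not the CMRT itself: it uses a scalar Haar wavelet instead of cardinal orthogonal multiwavelets, a coarse binned Radon projection, and an assumed set of four angles.

```python
import numpy as np

def radon_projection(img, theta, n_bins=None):
    """Project image intensities onto the direction theta (radians)."""
    h, w = img.shape
    if n_bins is None:
        n_bins = int(np.ceil(np.hypot(h, w)))
    ys, xs = np.mgrid[0:h, 0:w]
    # signed coordinate of each pixel along the projection axis
    t = xs * np.cos(theta) + ys * np.sin(theta)
    scale = (n_bins - 1) / (t.max() - t.min() + 1e-12)
    bins = ((t - t.min()) * scale).astype(int)
    proj = np.zeros(n_bins)
    np.add.at(proj, bins.ravel(), img.ravel())
    return proj

def haar_dwt(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def ridgelet_features(img, angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Ridgelet-style features: 1-D wavelet detail energy per projection."""
    feats = []
    for theta in angles:
        _, detail = haar_dwt(radon_projection(img, theta))
        feats.append(np.sum(detail ** 2))  # directional detail energy
    return np.array(feats)
```

    A straight line in the image collapses into a spike in the projection taken along its direction, so the corresponding wavelet detail energy dominates; this is the orientation selectivity the abstract refers to.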

    Anisotropic multiresolution analyses for deepfake detection

    Full text link
    Generative Adversarial Networks (GANs) have paved the path towards entirely new media generation capabilities at the forefront of image, video, and audio synthesis. However, they can also be misused to fabricate elaborate lies capable of stirring up public debate. The threat posed by GANs has sparked the need to discern between genuine and fabricated content. Previous studies tackled this task using classical machine learning techniques, such as k-nearest neighbours and eigenfaces, which unfortunately did not prove very effective. Subsequent methods focused on leveraging frequency decompositions, i.e., the discrete cosine transform, wavelets, and wavelet packets, to preprocess the input features for classifiers. However, existing approaches rely only on isotropic transformations. We argue that, since GANs primarily use isotropic convolutions to generate their output, they leave clear traces, their fingerprint, in the coefficient distributions of the sub-bands extracted by anisotropic transformations. We employ the fully separable wavelet transform and multiwavelets to obtain anisotropic features to feed to standard CNN classifiers. Lastly, we find the fully separable transform capable of improving the state of the art.
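
    A minimal sketch of the fully separable wavelet transform mentioned above, assuming a Haar filter: unlike the standard Mallat 2-D DWT, which alternates one row and one column decomposition per level, every row is decomposed over all levels first and then every column, producing the anisotropic (e.g. wide-and-short) sub-bands the paper exploits. The filter choice and level count here are assumptions for illustration.

```python
import numpy as np

def haar_1d_multilevel(x, levels):
    """Multi-level 1-D Haar DWT; returns [approx, d_L, ..., d_1] concatenated."""
    coeffs = []
    a = x.astype(float)
    for _ in range(levels):
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
        coeffs.append(d)
    return np.concatenate([a] + coeffs[::-1])

def fully_separable_dwt(img, levels=2):
    """Fully separable 2-D wavelet transform: all row levels, then all columns."""
    rows = np.apply_along_axis(haar_1d_multilevel, 1, img, levels)
    return np.apply_along_axis(haar_1d_multilevel, 0, rows, levels)
```

    The resulting coefficient grid mixes every horizontal scale with every vertical scale, so sub-bands with very different horizontal and vertical resolutions become available to the downstream CNN classifier.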

    Features for Cross Spectral Image Matching: A Survey

    Get PDF
    In recent years, cross-spectral matching has been gaining attention in various biometric systems for identification and verification purposes. Cross-spectral matching allows images taken under different electromagnetic spectra to be matched against each other. One of the keys to successful cross-spectral matching is the set of features used to represent an image, so the feature extraction step is an essential task, and researchers have improved matching accuracy by developing robust features. This paper presents the features most commonly selected for cross-spectral matching. The survey covers the basic concepts of cross-spectral matching, visual and thermal feature extraction, and state-of-the-art descriptors. Finally, the paper describes better feature selection methods for cross-spectral matching.
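
    As an illustration of the kind of descriptor such surveys cover, the following sketches a basic 8-neighbour Local Binary Pattern (LBP) histogram. Whether LBP appears in this particular survey is an assumption; it is shown here because it depends only on local intensity ordering, which makes it comparatively robust to the monotonic intensity shifts seen between, e.g., visible and thermal images.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """8-neighbour Local Binary Pattern histogram of a grey-level image.

    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value, one bit per neighbour.
    """
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

    Because the code depends only on the ordering of neighbouring intensities, the histogram is unchanged under any increasing intensity transform, which is one reason ordering-based descriptors recur in cross-spectral matching work.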

    Texture analysis using volume-radius fractal dimension

    Full text link
    Texture plays an important role in computer vision. It is one of the most important visual attributes used in image analysis, since it provides information about pixel organization across different regions of an image. This paper presents a novel approach to texture characterization based on complexity analysis. The proposed approach extends the idea of the mass-radius fractal dimension, a method originally developed for shape analysis, to a set of coordinates in 3D space that represents the texture under analysis, yielding a signature able to efficiently characterize different texture classes in terms of complexity. An experiment using images from the Brodatz album illustrates the method's performance.
    Comment: 4 pages, 4 figures
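
    The volume-radius idea can be sketched as follows: lift each pixel to a 3-D point (x, y, intensity), count how many points fall inside a ball of growing radius, and read a fractal dimension off the slope of the log-log volume-radius curve. Using the point-cloud centroid as the single reference point and a least-squares fit are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def volume_radius_signature(img, radii):
    """Volume-radius signature of a grey-level texture.

    The image is lifted to 3-D points (x, y, z) with z the pixel
    intensity; for each radius r, the 'volume' is the number of
    points inside the ball of radius r around the cloud centroid.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel(), img.ravel().astype(float)])
    centre = pts.mean(axis=0)
    d = np.linalg.norm(pts - centre, axis=1)
    return np.array([(d <= r).sum() for r in radii])

def fractal_dimension(img, radii):
    """Slope of log(volume) vs log(radius), i.e. the estimated dimension."""
    v = volume_radius_signature(img, radii)
    slope, _ = np.polyfit(np.log(radii), np.log(np.maximum(v, 1)), 1)
    return slope
```

    A perfectly flat texture leaves the lifted points on a plane, so the estimated dimension stays near 2; rougher textures push the points into the third axis and raise the slope, which is what makes the signature a complexity measure.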

    Remote-Sensing Image Scene Classification With Deep Neural Networks in JPEG 2000 Compressed Domain

    Get PDF
    To reduce storage requirements, remote-sensing (RS) images are usually stored in compressed format. Existing scene classification approaches using deep neural networks (DNNs) require the images to be fully decompressed, which is a computationally demanding task in operational applications. To address this issue, in this article we propose a novel approach to scene classification in Joint Photographic Experts Group (JPEG) 2000 compressed RS images. The proposed approach consists of two main steps: 1) approximation of the finer-resolution subbands of the reversible biorthogonal wavelet filters used in JPEG 2000; and 2) characterization of the high-level semantic content of the approximated wavelet subbands and scene classification based on the learned descriptors. This is achieved by taking the codestreams associated with the coarsest-resolution wavelet subband as input and approximating the finer-resolution subbands using a number of transposed convolutional layers. Then, a series of convolutional layers models the high-level semantic content of the approximated wavelet subbands. Thus, the proposed approach models the multiresolution paradigm of the JPEG 2000 compression algorithm in an end-to-end trainable unified neural network. In the classification stage, the proposed approach takes only the coarsest-resolution wavelet subbands as input, thereby reducing the time required for decoding. Experimental results on two benchmark aerial image archives demonstrate that the proposed approach significantly reduces the computational time with similar classification accuracies when compared with traditional RS scene classification approaches (which require full image decompression).
    EC/H2020/759764/EU/Accurate and Scalable Processing of Big Data in Earth Observation/BigEart
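
    The upsampling primitive behind step 1 can be sketched in plain numpy. This is a minimal single-channel transposed convolution with no padding and no learned bias, shown only to illustrate how a coarse sub-band is scattered up to a finer resolution; the kernel values, stride, and layer stacking in the actual network are learned and are not reproduced here.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Minimal 2-D transposed convolution (single channel, no padding).

    Each input value scatters a scaled copy of the kernel into the
    output grid, spaced `stride` apart; this is the operation the
    network stacks to approximate finer wavelet sub-bands from the
    coarsest one.
    """
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out
```

    With stride 2 the output roughly doubles the spatial resolution of the input, matching the dyadic scale factor between successive JPEG 2000 wavelet resolutions.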