4,754 research outputs found

    Further results on dissimilarity spaces for hyperspectral images RF-CBIR

    Content-Based Image Retrieval (CBIR) systems are powerful search tools for image databases, but they have seen little application to hyperspectral images. Relevance feedback (RF) is an iterative process that uses machine learning techniques and user feedback to improve the performance of CBIR systems. We expand previous research on hyperspectral CBIR systems built on dissimilarity functions defined either on spectral and spatial features extracted by spectral unmixing techniques, or on dictionaries extracted by dictionary-based compressors. These dissimilarity functions are not suitable for direct use with common machine learning techniques. We propose a general RF approach based on dissimilarity spaces, which is better suited to applying machine learning algorithms to hyperspectral RF-CBIR. We validate the proposed RF method for hyperspectral CBIR systems on a real hyperspectral dataset. Comment: In Pattern Recognition Letters (2013).
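
    A minimal sketch of the dissimilarity-space idea described above, not the authors' implementation: each image is represented by its vector of dissimilarities to a set of prototype images, after which any standard learner can be trained on the user's relevance labels. The prototype set, the pairwise function dissim, and the SVM are illustrative assumptions.

        # Sketch: one relevance-feedback round in a dissimilarity space (illustrative only).
        # `dissim(a, b)` stands in for a dissimilarity defined on unmixing features or
        # compression dictionaries, as in the paper; its definition is assumed here.
        import numpy as np
        from sklearn.svm import SVC

        def to_dissimilarity_space(images, prototypes, dissim):
            """Embed each image as its vector of dissimilarities to the prototypes."""
            return np.array([[dissim(img, p) for p in prototypes] for img in images])

        def relevance_feedback_round(database, prototypes, fb_idx, fb_labels, dissim):
            """Fit a classifier on the labelled images, then re-rank the whole database."""
            X = to_dissimilarity_space(database, prototypes, dissim)
            clf = SVC(kernel="rbf", probability=True)   # any off-the-shelf learner works here
            clf.fit(X[fb_idx], fb_labels)               # fb_labels: 1 = relevant, 0 = not relevant
            scores = clf.predict_proba(X)[:, 1]         # relevance score for every image
            return np.argsort(-scores)                  # ranking presented in the next round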

    Computing Information Quantity as Similarity Measure for Music Classification Task

    This paper proposes a novel method that can replace the compression-based dissimilarity measure (CDM) in the composer-estimation task. The main features of the proposed method are clarity and scalability. First, since the proposed method is formalized in terms of information quantity, its results are easier to reproduce than those of the CDM method, which depend on a particular compression program. Second, the proposed method has lower computational complexity in the number of training examples than the CDM method. The number of correct classifications was compared with that of the CDM on a composer-estimation task covering 75 piano scores by five composers. The proposed method performed better than the CDM method, which relies on the file sizes produced by a particular compression program. Comment: The 2017 International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA 2017).
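
    For context, the baseline CDM that the paper replaces can be written in a few lines; the sketch below uses zlib as the "particular compression program", which is exactly the kind of compressor dependence the proposed method avoids. The byte-string encoding of the scores is an assumption.

        # Sketch of the compression-based dissimilarity measure (CDM) baseline.
        # zlib is used purely for illustration; results depend on the chosen compressor.
        import zlib

        def c(data: bytes) -> int:
            """Compressed size of a byte string."""
            return len(zlib.compress(data, 9))

        def cdm(x: bytes, y: bytes) -> float:
            """CDM(x, y) = C(xy) / (C(x) + C(y)); lower values indicate higher similarity."""
            return c(x + y) / (c(x) + c(y))

        # A score by an unknown composer (serialized to bytes) would be attributed to the
        # candidate composer whose pieces give the smallest CDM values.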

    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it has been largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that efficiently uses this information. It is based on the statistical deviation between the steerable-pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) to model the underlying statistics. To quantify the degradation, we develop and evaluate two measures based respectively on the geodesic distance between two MGGDs and on the closed-form Kullback-Leibler (KL) divergence. We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate that the proposed framework achieves good consistency with human visual perception, and the best configuration is obtained with the CIELAB color space combined with the KL-divergence measure.
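
    For intuition only, the sketch below evaluates the closed-form KL divergence for the zero-mean multivariate Gaussian, which the MGGD reduces to for a particular shape parameter; the paper's metric uses the general MGGD expressions, so this is a simplified stand-in rather than the proposed measure.

        # Simplified illustration: closed-form KL divergence between two zero-mean
        # multivariate Gaussians (a special case of the MGGD). Covariances would be
        # estimated from steerable-pyramid coefficients of the reference and degraded images.
        import numpy as np

        def kl_zero_mean_gaussians(sigma_ref: np.ndarray, sigma_deg: np.ndarray) -> float:
            """KL( N(0, sigma_ref) || N(0, sigma_deg) ) between covariance matrices."""
            k = sigma_ref.shape[0]
            inv_deg = np.linalg.inv(sigma_deg)
            trace_term = np.trace(inv_deg @ sigma_ref)
            _, logdet_ref = np.linalg.slogdet(sigma_ref)
            _, logdet_deg = np.linalg.slogdet(sigma_deg)
            return 0.5 * (trace_term - k + logdet_deg - logdet_ref)

        # A larger divergence between the reference and degraded statistics signals a
        # stronger perceptual degradation in this RR setting.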

    Preprocessing Solar Images while Preserving their Latent Structure

    Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory, a NASA satellite, collect massive streams of high-resolution images of the Sun through multiple wavelength filters. Reconstructing pixel-by-pixel thermal properties from these images can be framed as an ill-posed inverse problem with Poisson noise, but this reconstruction is computationally expensive and there is disagreement among researchers about which regularization or prior assumptions are most appropriate. This article presents an image segmentation framework for preprocessing such images in order to reduce the data volume while preserving as much thermal information as possible for later downstream analyses. The resulting segmented images reflect thermal properties but do not depend on solving the ill-posed inverse problem. This allows users to avoid the Poisson inverse problem altogether or to tackle it on each of ~10 segments rather than on each of ~10^7 pixels, reducing computing time by a factor of ~10^6. We employ a parametric class of dissimilarities that can be expressed as cosine dissimilarity functions or Hellinger distances between nonlinearly transformed vectors of multi-passband observations in each pixel. We develop a decision-theoretic framework for choosing the dissimilarity that minimizes the expected loss arising when identifiable thermal properties are estimated from segmented images rather than pixel by pixel. We also examine the efficacy of different dissimilarities for recovering clusters in the underlying thermal properties. The expected losses are computed under scientifically motivated prior distributions. Two simulation studies guide our choices of dissimilarity function. We illustrate our method by segmenting images of a coronal hole observed on 26 February 2015.
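
    The parametric class of dissimilarities mentioned above can be illustrated with a short sketch: a nonlinear (here, power-law) transform of each pixel's multi-passband vector followed by either a cosine dissimilarity or a Hellinger distance. The exponent and the unit-sum normalization are illustrative assumptions, not the tuned choices of the paper.

        # Illustrative dissimilarities between multi-passband pixel vectors.
        import numpy as np

        def transform(x: np.ndarray, alpha: float = 0.5) -> np.ndarray:
            """Nonlinear per-passband transform (power law chosen for illustration)."""
            return np.power(np.clip(x, 0.0, None), alpha)

        def cosine_dissimilarity(x: np.ndarray, y: np.ndarray, alpha: float = 0.5) -> float:
            u, v = transform(x, alpha), transform(y, alpha)
            return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        def hellinger_distance(x: np.ndarray, y: np.ndarray) -> float:
            """Hellinger distance between passband vectors normalized to unit sum."""
            p, q = x / x.sum(), y / y.sum()
            return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

        # Pixels with small mutual dissimilarities would be grouped into the same segment,
        # approximately preserving the latent thermal information.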

    Improving the robustness and reliability of population-based global biodiversity indicators

    The current global biodiversity crisis is complicated by a data crisis. Reliable tools are needed to guide scientific research and conservation policy decisions, but the data underlying those tools are incomplete and biased. For example, the Living Planet Index (LPI) tracks the changing status of global vertebrate biodiversity, but gaps, biases and quality issues plague the aggregated data used to calculate trends. Unfortunately, we have little understanding of how reliable biodiversity indicators are. In this thesis I develop a suite of tools to assess and improve the reliability of trends in the LPI and similar indicators. First, I explore distance measures as a flexible toolset for comparing time series and trends. I test distance measures for properties related to time series comparisons and rate their relative sensitivities, then expand the results into a framework for choosing an appropriate distance measure for any time series comparison task in ecology. I use the framework to select an appropriate metric for determining trend accuracy. Second, I construct a model of trend reliability from accuracy measurements of sampled trend replicates calculated from artificially generated time series datasets. I apply the model to the LPI to reveal that the majority of trends need more data to be considered reliable, particularly across the Global South, and for reptiles and amphibians everywhere. Finally, I develop a method to account for sampling error and serial correlation in the confidence intervals of indicators that aggregate abundance data from different sources. I show that the new method yields more robust and accurate confidence intervals across a wide range of dataset parameters, without reducing trend accuracy. Applying the method to the LPI reveals that its current approach produces inaccurate and overly wide confidence intervals.
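
    As a hedged sketch of the kind of time-series comparison explored in the thesis (not its selected metric), the code below scores how close a sampled index trajectory is to a reference trend using two simple measures; the log transform and the specific measures are assumptions for illustration.

        # Illustrative distance measures for comparing abundance-index trajectories.
        import numpy as np

        def log_trajectory_distance(trend_a: np.ndarray, trend_b: np.ndarray) -> float:
            """Point-wise Euclidean distance between log-scaled index values."""
            return float(np.sqrt(np.sum((np.log(trend_a) - np.log(trend_b)) ** 2)))

        def growth_rate_distance(trend_a: np.ndarray, trend_b: np.ndarray) -> float:
            """Mean absolute difference between year-on-year log growth rates,
            emphasizing the shape of a trend rather than its absolute level."""
            ra, rb = np.diff(np.log(trend_a)), np.diff(np.log(trend_b))
            return float(np.mean(np.abs(ra - rb)))

        # Repeating such comparisons over many simulated datasets yields the accuracy
        # measurements from which a reliability model for indicator trends can be built.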