
    Spatially Localized Visual Dictionary Learning

    This paper addresses the challenge of devising new representation learning algorithms that overcome the lack of interpretability of classical visual models. It introduces a new recursive visual patch selection technique built on top of a Shared Nearest Neighbors embedding method. The main contribution of the paper is to drastically reduce the high dimensionality of this over-complete representation through a recursive feature elimination method. We show that the number of spatial atoms in the representation can be reduced by up to two orders of magnitude without significantly degrading the encoded information. The resulting representations are shown to provide image classification performance competitive with the state of the art while enabling highly interpretable visual models to be learned.
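    The recursive feature elimination idea described above can be sketched as follows. This is a minimal NumPy toy, not the paper's algorithm: the Shared Nearest Neighbors embedding is omitted, the logistic scorer and every parameter name are hypothetical, and atoms are simply columns whose smallest-magnitude weights are discarded at each round.

```python
import numpy as np

def recursive_atom_elimination(X, y, keep, drop_frac=0.5, iters=100, lr=0.1):
    """Toy recursive feature elimination (hypothetical sketch).

    Repeatedly fits a simple logistic model on the surviving atoms
    (columns of X) and discards the fraction with the smallest
    absolute weights, until only `keep` atoms remain.
    """
    active = np.arange(X.shape[1])          # indices of surviving atoms
    while active.size > keep:
        Xa = X[:, active]
        w = np.zeros(Xa.shape[1])
        for _ in range(iters):              # plain gradient descent on logistic loss
            p = 1.0 / (1.0 + np.exp(-Xa @ w))
            w -= lr * Xa.T @ (p - y) / len(y)
        n_drop = max(1, int(active.size * drop_frac))
        n_drop = min(n_drop, active.size - keep)
        order = np.argsort(np.abs(w))       # smallest-magnitude weights first
        active = np.delete(active, order[:n_drop])
    return active
```

    Halving the active set at each round gives the logarithmic number of refits that makes such a reduction (here 64 atoms down to 8) cheap relative to a single exhaustive search.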

    Iterative Random Visual Word Selection

    In content-based image retrieval, one of the most important steps is the construction of image signatures. To this end, many state-of-the-art approaches build a visual vocabulary. In this paper, we propose a new methodology for visual vocabulary construction that achieves high retrieval performance. Moreover, it is computationally inexpensive to build and requires no prior knowledge of the features or dataset used. Classically, the vocabulary is built by aggregating features into centroids using a clustering algorithm; the final centroids are treated as visual "words". Our approach builds the vocabulary through an iterative random visual word selection that combines a saliency map with a tf-idf scheme. Experimental results show that it outperforms the original "bag of visual words" approach in both efficiency and effectiveness.
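    The selection loop described above can be sketched roughly as below. This is a hypothetical NumPy toy, not the paper's method: the saliency map is omitted, the function and parameter names are invented, and the tf-idf scheme is reduced to scoring each candidate word by how often it is assigned times an idf-style rarity factor.

```python
import numpy as np

def iterative_random_vocab(descriptors_per_image, vocab_size,
                           n_candidates=50, n_rounds=3, seed=0):
    """Toy iterative random visual-word selection (hypothetical sketch).

    Each round, randomly sampled descriptors are added as candidate
    words, every image's descriptors are assigned to their nearest
    word, and only the words with the best tf-idf-style scores are
    kept as the vocabulary.
    """
    rng = np.random.default_rng(seed)
    all_desc = np.vstack(descriptors_per_image)
    n_img = len(descriptors_per_image)
    vocab = all_desc[rng.choice(len(all_desc), vocab_size, replace=False)]
    for _ in range(n_rounds):
        k = min(n_candidates, len(all_desc))
        cand = all_desc[rng.choice(len(all_desc), k, replace=False)]
        words = np.vstack([vocab, cand])
        counts = np.zeros(len(words))       # total assignments ("tf")
        df = np.zeros(len(words))           # images using each word
        for desc in descriptors_per_image:
            d = ((desc[:, None, :] - words[None, :, :]) ** 2).sum(-1)
            assigned = d.argmin(1)          # nearest word per descriptor
            np.add.at(counts, assigned, 1)
            df[np.unique(assigned)] += 1
        idf = np.log(n_img / (df + 1))      # rarer words score higher
        score = counts * idf
        vocab = words[np.argsort(-score)[:vocab_size]]  # keep the best
    return vocab
```

    Because candidates are drawn directly from existing descriptors, no clustering pass over the full descriptor set is needed, which is what makes this style of construction computationally inexpensive.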