
    Visual Search at eBay

    In this paper, we propose a novel end-to-end approach to scalable visual search infrastructure. We discuss the challenges posed by a massive, volatile inventory like eBay's and present our solutions to overcome them. We harness the large image collection of eBay listings and state-of-the-art deep learning techniques to perform visual search at scale. A supervised approach that restricts search to the top predicted categories, together with compact binary signatures, is key to scaling up without compromising accuracy and precision. Both are produced by a common deep neural network requiring only a single forward inference. The system architecture is presented with in-depth discussion of its basic components and of the optimizations that trade off search relevance against latency. This solution is currently deployed in a distributed cloud infrastructure and fuels visual search in eBay ShopBot and Close5. We show benchmarks on the ImageNet dataset, on which our approach is faster and more accurate than several unsupervised baselines. We share our learnings in the hope that visual search becomes a first-class citizen for all large-scale search engines rather than an afterthought. Comment: To appear in the 23rd SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2017. A demonstration video can be found at https://youtu.be/iYtjs32vh4
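    The abstract describes a single forward pass that yields both top category predictions and a compact binary signature, with search then restricted to those categories. The sketch below illustrates that idea only in outline, assuming a sign-based binarization and Hamming-distance ranking; the function names, signature length, and index layout are not from the paper.

```python
# Hedged sketch: category-restricted Hamming search over binary signatures.
# The exact network, hashing layer, and index structure used at eBay are not
# given in the abstract; everything here is an illustrative assumption.
import numpy as np

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Turn a real-valued embedding into a compact binary signature (sign hash)."""
    return (embedding > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def search(query_emb, query_top_categories, index, k=10):
    """index: dict mapping category -> list of (item_id, binary_signature).
    Only items in the query's top predicted categories are scanned, which is
    the scalability idea the abstract alludes to."""
    q_sig = binarize(query_emb)
    candidates = []
    for cat in query_top_categories:
        for item_id, sig in index.get(cat, []):
            candidates.append((hamming(q_sig, sig), item_id))
    candidates.sort()
    return candidates[:k]
```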

    Design and Implementation of a Multimedia Information Retrieval Engine for the MSR-Bing Image Retrieval Challenge

    The aim of this work is to design and implement a multimedia information retrieval engine for the MSR-Bing Retrieval Challenge organized by Microsoft. The challenge is based on the Clickture dataset, generated from click logs of Bing image search. The system has to predict the relevance of images with respect to text queries by associating a score to each (image, text query) pair that indicates how well the text query describes the image content. We attempt to combine textual and visual information by performing text-based and content-based image retrieval. The framework used to extract visual features is Caffe, an efficient implementation of deep Convolutional Neural Networks (CNNs). Decisions are made using a knowledge base of triplets, each consisting of a text query, an image, and the number of times users clicked on that image for that text query. Two strategies were proposed: in one case we analyse the intersection of the triplet elements retrieved using the textual query and the image itself; in the other we analyse their union. To address efficiency issues, we proposed an approach that indexes visual features using Apache Lucene, a text search engine library written entirely in Java and suitable for nearly any application requiring full-text search. To this end, we converted image features into a textual form so they can be stored in an inverted index by means of Lucene. In this way we were able to set up a robust retrieval system that combines full-text search with content-based image retrieval capabilities. To demonstrate that our search for textually and visually similar images works in practice, a small web-based prototype has been implemented. We evaluated different versions of our system on the development set in order to choose the similarity measures used to compare images and to assess the best sorting strategy. Finally, our proposed approaches have been compared with those implemented by the winners of previous challenge editions.
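    The key engineering idea here is converting visual features into text so that an inverted-index engine like Lucene can handle them. The abstract does not spell out the encoding, so the sketch below shows one plausible "surrogate text" scheme, written in Python for illustration even though Lucene itself is a Java library; the token naming and scaling factor are assumptions.

```python
# Hedged sketch of a "surrogate text" encoding for visual features.
# One token per feature dimension, repeated in proportion to its activation,
# so term-frequency scoring in a full-text engine approximates feature
# similarity. This is only one plausible choice, not the thesis's exact scheme.
import numpy as np

def feature_to_surrogate_text(feature: np.ndarray, scale: int = 10) -> str:
    tokens = []
    for dim, value in enumerate(feature):
        # Keep only non-negative activations (e.g. after ReLU) and quantize them.
        repeats = int(round(max(float(value), 0.0) * scale))
        tokens.extend([f"f{dim}"] * repeats)
    return " ".join(tokens)

# The resulting string can be indexed as an ordinary document field by any
# full-text engine (Lucene, Elasticsearch, ...).
print(feature_to_surrogate_text(np.array([0.0, 0.73, 0.12, 1.4])))
```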

    Document image classification combining textual and visual features.

    This research contributes to the problem of classifying document images. The main contribution of this thesis is the exploitation of textual and visual features through an approach based on Convolutional Neural Networks. The study uses a combination of Optical Character Recognition and Natural Language Processing algorithms to extract and manipulate relevant text concepts from document images. This textual content is embedded within the document images, with the aim of adding elements that help improve the classification results of a Convolutional Neural Network. The experimental phase shows that the overall document classification accuracy of a Convolutional Neural Network trained on these text-augmented document images is considerably higher than that achieved by a similar model trained solely on the original document images, especially when different classes of documents share similar visual characteristics. The comparison between our method and state-of-the-art approaches demonstrates the effectiveness of combining visual and textual features. Although this thesis is about document image classification, the idea of using textual and visual features is not restricted to this context; it comes from the observation that textual and visual information are complementary and synergetic in many respects.
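    To make the text-augmentation idea concrete, here is a minimal sketch of one way OCR output could be rendered back onto a document image before CNN training. The abstract does not specify how text concepts are selected or embedded, so the keyword strip, the use of pytesseract, and all parameters below are assumptions.

```python
# Hedged sketch: augmenting a document image with OCR-extracted keywords
# before feeding it to a CNN classifier. Illustration only; not the thesis's
# actual embedding scheme.
from PIL import Image, ImageDraw
import pytesseract

def augment_with_text(path: str, out_path: str, n_keywords: int = 10) -> None:
    image = Image.open(path).convert("RGB")
    text = pytesseract.image_to_string(image)            # OCR step
    keywords = [w for w in text.split() if len(w) > 4][:n_keywords]

    # Render the keywords in a strip below the original page, so the CNN
    # sees both the visual layout and an explicit textual cue.
    strip_height = 40
    augmented = Image.new("RGB", (image.width, image.height + strip_height), "white")
    augmented.paste(image, (0, 0))
    draw = ImageDraw.Draw(augmented)
    draw.text((5, image.height + 5), " ".join(keywords), fill="black")
    augmented.save(out_path)
```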

    Web Scale Image Retrieval based on Image Text Query Pair and Click Data

    The growing importance of traditional text-based image retrieval is due to its popularity in web image search engines; Google, Yahoo, and Bing are among the engines that use this technique. Text-based image retrieval is based on the assumption that the surrounding text describes the image. For text-based image retrieval systems, the input is a text query and the output is a ranked set of images in which the most relevant results appear first. The limitation of text-based image retrieval is that the query text often cannot describe the content of the image perfectly, since visual information is highly varied. The Microsoft Research Bing Image Retrieval Challenge aims to achieve cross-modal retrieval by ranking the relevance of query text terms to images. This thesis describes the approaches of our team MUVIS for the challenge, which is to develop an image-query pair scoring system that assesses how effectively query terms describe images. The provided dataset included a training set containing more than 23 million clicked image-query pairs collected from the web over one year, as well as a manually labelled development set. For each image-query pair, a floating-point score was produced, reflecting how well the query describes the given image, with higher numbers indicating higher relevance. For any query, sorting its associated images by this score produced the retrieval ranking. The system developed by the MUVIS team consisted of five modules. The two main modules were a text processing module and a principal-component-analysis-assisted perceptron regression with random sub-space selection module. To enhance evaluation accuracy, three complementary modules were also developed: a face bank, a duplicate image detector, and optical character recognition. Both the main and the complementary modules relied on results returned by the text processing module. OverFeat features extracted from the text processing module's results served as input to the principal-component-analysis-assisted perceptron regression with random sub-space selection module, which further transformed the feature vector. The relevance score for each query-image pair was obtained by comparing the features of the query image with those of the relevant training images. For the feature extraction used in the face bank and duplicate image detector modules, we used the CMUVIS framework, a distributed computing framework for big data developed by the MUVIS group. Three runs were submitted for evaluation: "Master", "Sub2", and "Sub3". The cumulative similarity was returned as the requested image's relevance. Using the proposed approach we reached a discounted cumulative gain of 0.5099 on the development set and 0.5116 on the test set. Our solution achieved fourth place in the Microsoft Research Bing grand challenge 2014 for the master submission and second place for the overall submission.
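    The core scoring step described above compares the features of a candidate image with those of training images associated with the query and returns an aggregate similarity. The sketch below captures only that idea; the actual MUVIS pipeline (OverFeat features, PCA-assisted perceptron regression, face bank, duplicate detector, OCR) is far richer, and the cosine-similarity aggregation and top-k parameter here are assumptions.

```python
# Hedged sketch of the relevance-scoring idea: aggregate the similarity
# between a candidate image's features and the features of training images
# clicked for (or textually matched to) the same query.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def relevance_score(candidate_feat: np.ndarray,
                    clicked_feats: list,
                    top_k: int = 5) -> float:
    """Average similarity to the k most similar clicked training images."""
    if not clicked_feats:
        return 0.0
    sims = sorted((cosine(candidate_feat, f) for f in clicked_feats), reverse=True)
    return float(np.mean(sims[:top_k]))
```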

    MSMG-Net: Multi-scale Multi-grained Supervised Networks for Multi-task Image Manipulation Detection and Localization

    With the rapid advances in image editing techniques in recent years, image manipulation detection has attracted considerable attention due to the increasing security risks posed by tampered images. To address these challenges, a novel multi-scale multi-grained deep network (MSMG-Net) is proposed to automatically identify manipulated regions. In MSMG-Net, a parallel multi-scale feature extraction structure is used to extract multi-scale features. Multi-grained feature learning is then used to perceive object-level semantic relations among the multi-scale features by introducing shunted self-attention. To fuse the multi-scale, multi-grained features, a global and local feature fusion block is designed for manipulated-region segmentation in a bottom-up manner, and a multi-level feature aggregation block is designed for edge-artifact detection in a top-down manner. Thus, MSMG-Net can effectively perceive object-level semantics and encode edge artifacts. Experimental results on five benchmark datasets demonstrate the superior performance of the proposed method, which outperforms state-of-the-art manipulation detection and localization methods. Extensive ablation experiments and feature visualizations show that multi-scale multi-grained learning yields effective visual representations of manipulated regions. In addition, MSMG-Net shows better robustness when images are further altered by various post-processing methods.
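    The parallel multi-scale extraction followed by fusion can be illustrated with a small, generic PyTorch module. This is only a sketch of the general pattern, not MSMG-Net itself: the real network also uses shunted self-attention, multi-grained learning, and dedicated fusion and aggregation blocks, and the channel sizes and dilated convolutions below are assumptions.

```python
# Hedged sketch of parallel multi-scale feature extraction with simple fusion.
import torch
import torch.nn as nn

class ParallelMultiScale(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 64):
        super().__init__()
        # Three parallel branches viewing the image at different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # Simple fusion of the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * 3, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))

# Usage: a segmentation head on top of these fused features would predict
# the manipulated-region mask.
# model = ParallelMultiScale(); feats = model(torch.randn(1, 3, 256, 256))
```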

    Adaptive Superpixel for Active Learning in Semantic Segmentation

    Learning semantic segmentation requires pixel-wise annotations, which are time-consuming and expensive to obtain. To reduce the annotation cost, we propose a superpixel-based active learning (AL) framework that instead collects a single dominant label per superpixel. Specifically, it consists of adaptive superpixel and sieving mechanisms, both fully dedicated to AL. At each round of AL, we adaptively merge neighboring pixels with similar learned features into superpixels. We then query a selected subset of these superpixels using an acquisition function that does not assume uniform superpixel sizes. This approach is more efficient than existing methods, which rely only on innate features such as RGB color and assume uniform superpixel sizes. Obtaining a dominant label per superpixel drastically reduces the annotators' burden, as it requires fewer clicks. However, it inevitably introduces noisy annotations due to mismatches between superpixels and the ground-truth segmentation. To address this issue, we further devise a sieving mechanism that identifies and excludes potentially noisy annotations from learning. Our experiments on the Cityscapes and PASCAL VOC datasets demonstrate the efficacy of the adaptive superpixel and sieving mechanisms.
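    The dominant-label and sieving ideas can be sketched in a few lines: each queried superpixel gets one class label from the annotator, and pixels whose current model prediction strongly disagrees with that label are excluded from the training targets. The adaptive merging of learned features into superpixels is not reproduced here, and the confidence threshold below is an assumption, not the paper's criterion.

```python
# Hedged sketch of dominant-label targets with a simple sieving rule.
import numpy as np

def dominant_label_targets(superpixels: np.ndarray,
                           annotations: dict,
                           pred_probs: np.ndarray,
                           ignore_index: int = 255,
                           conf_threshold: float = 0.3) -> np.ndarray:
    """superpixels: (H, W) map of superpixel ids.
    annotations: superpixel id -> dominant class chosen by the annotator.
    pred_probs: (C, H, W) softmax output of the current model."""
    targets = np.full(superpixels.shape, ignore_index, dtype=np.int64)
    for sp_id, label in annotations.items():
        mask = superpixels == sp_id
        # Sieving: keep only pixels where the model assigns the annotated
        # class a reasonable probability, filtering likely label noise.
        keep = mask & (pred_probs[label] >= conf_threshold)
        targets[keep] = label
    return targets
```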