
    Prosemantic features for content-based image retrieval

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-18449-9_8 (Revised Selected Papers of the 7th International Workshop, AMR 2009, Madrid, Spain, September 24-25, 2009).

    We present an image description approach based on prosemantic features. The images are represented by a set of low-level features related to their structure and color distribution. These descriptions are fed to a battery of image classifiers trained to evaluate the membership of the images with respect to a set of 14 overlapping classes; the prosemantic features are obtained by packing the scores together. To verify the effectiveness of the approach, we designed a target search experiment in which both low-level and prosemantic features are embedded into a content-based image retrieval system exploiting relevance feedback. The experiments show that the use of prosemantic features allows for a quicker and more successful retrieval of the query images.
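The pipeline the abstract describes — low-level descriptors scored by a battery of per-class classifiers, with the scores packed into one vector — can be sketched as follows. This is only an illustration of the idea, not the paper's implementation: the dimensionalities, the random stand-in weights, and the use of a sigmoid linear scorer per class are all assumptions; only the count of 14 overlapping classes comes from the abstract.

```python
import numpy as np

N_CLASSES = 14    # overlapping semantic classes (from the abstract)
N_LOWLEVEL = 64   # size of the low-level colour/structure descriptor (assumed)

rng = np.random.default_rng(0)
# Stand-ins for the trained classifier battery: one linear scorer per class.
W = rng.normal(size=(N_CLASSES, N_LOWLEVEL))
b = rng.normal(size=N_CLASSES)

def prosemantic(x):
    """Pack the per-class membership scores into one 14-d feature vector."""
    scores = W @ x + b                    # one raw score per semantic class
    return 1.0 / (1.0 + np.exp(-scores))  # squash to [0, 1] memberships

x = rng.normal(size=N_LOWLEVEL)  # low-level features of one image (dummy)
f = prosemantic(x)               # compact prosemantic descriptor
```

Retrieval with relevance feedback would then compare images by distances between these score vectors rather than between the raw low-level descriptors.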

    Searching through photographic databases with QuickLook

    G. Ciocca, C. Cusano, R. Schettini, S. Santini, A. de Polo, F. Tavanti, "Searching through photographic databases with QuickLook", Proc. Multimedia on Mobile Devices 2012; and Multimedia Content Access: Algorithms and Systems VI, Eds. Reiner Creutzburg, David Akopian, Cees G. M. Snoek, Nicu Sebe, Lyndon Kennedy, vol. 8304, 83040V (2012). Copyright 2012 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

    We present the results obtained by including a new image descriptor, which we call the prosemantic feature vector, within the framework of the QuickLook2 image retrieval system. By coupling the prosemantic features with the relevance feedback mechanism provided by QuickLook2, the user can move more rapidly and precisely through the feature space toward the intended goal. The prosemantic features are obtained by a two-step feature extraction process. In the first step, low-level features related to image structure and color distribution are extracted from the images. In the second step, these features are used as input to a bank of classifiers, each trained to recognize a given semantic category, to produce score vectors. We evaluated the efficacy of the prosemantic features in search tasks on a dataset provided by the Fratelli Alinari Photo Archive.

    Spatially organized visualization of image query results

    Gianluigi Ciocca, Claudio Cusano, Simone Santini, Raimondo Schettini, "Spatially organized visualization of image query results", Proceedings of SPIE 7881, Multimedia on Mobile Devices 2011; and Multimedia Content Access: Algorithms and Systems V, Eds. David Akopian, Reiner Creutzburg, Cees G. M. Snoek, Nicu Sebe, Lyndon Kennedy, SPIE (2011).

    In this work we present a system that visualizes the results obtained from image search engines in such a way that users can conveniently browse the retrieved images. The way in which search results are presented allows the user to grasp the composition of the set of images "at a glance". To do so, images are grouped and positioned according to their distribution in a prosemantic feature space, which encodes information about their content at an abstraction level between the visual and the semantic. The compactness of the feature space allows a fast analysis of the image distribution, so all the computation can be performed in real time.
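The spatial layout described above — positioning results by their distribution in the compact prosemantic space — can be illustrated with a plain PCA projection to 2-D. This is a sketch under stated assumptions: the paper's actual layout algorithm may differ, and the 14-dimensional random features here merely stand in for real prosemantic vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(200, 14))  # 200 query results x 14 prosemantic scores (dummy)

def layout_2d(features):
    """Place each image in 2-D along the top two principal components."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T   # (n, 2) display coordinates

xy = layout_2d(F)  # spatial positions for the "at a glance" view
```

Because the feature space is only 14-dimensional, the SVD here is cheap, which is consistent with the abstract's claim that the layout can be computed in real time.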

    Unsupervised classemes

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-33885-4_41 (Proceedings of Information Fusion in Computer Vision for Concept Recognition at ECCV 2012).

    In this paper we present a new model of semantic features that, unlike previously presented methods, does not rely on the presence of a labeled training database: the feature extraction function is created in an unsupervised manner. We test these features on an unsupervised classification (clustering) task and show that they outperform primitive (low-level) features, and that their performance is comparable to that of supervised semantic features, which are much more expensive to obtain since they rely on a labeled training set to train the feature extraction function.
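One minimal way such label-free semantic features could be constructed — and this is an assumption for illustration, not the paper's actual method — is to cluster unlabelled low-level features and describe each image by its affinity to the cluster centres, so no annotated training set is ever needed.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 32))  # unlabelled low-level features (dummy)

def kmeans(X, k=10, iters=20):
    """Plain k-means: the cluster centres play the role of 'classifiers'."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centres[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centres[j] = X[assign == j].mean(axis=0)
    return centres

centres = kmeans(X)

def classeme(x, centres):
    """Unsupervised semantic feature: normalised affinity to each centre."""
    a = np.exp(-np.linalg.norm(centres - x, axis=1))
    return a / a.sum()

f = classeme(X[0], centres)  # 10-d unsupervised descriptor
```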

    Complex Event Recognition from Images with Few Training Examples

    We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use them to extract semantic features from images and classify them into social event categories with few training examples. Discovered concepts include a variety of objects, scenes, actions, and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept, and we use pretrained CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines that use deep CNN features directly to classify images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example.

    Comment: Accepted to Winter Applications of Computer Vision (WACV '17).
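The pipeline above — score an image against a bank of concept classifiers trained on web images, then match the resulting concept vector against a handful of labelled examples — can be sketched as follows. Everything here is a stand-in: the random weights replace classifiers trained on real web images, the event names are invented, and nearest-neighbour matching is just one simple way to use a single training example per category.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CONCEPTS, D_CNN = 50, 512  # concept bank size and CNN feature size (assumed)
Wc = rng.normal(size=(N_CONCEPTS, D_CNN))  # one linear concept scorer each

def concept_vector(cnn_feat):
    """Map pretrained CNN features to a vector of concept scores."""
    return 1.0 / (1.0 + np.exp(-(Wc @ cnn_feat)))

# A single labelled example per event category (the few-shot setting).
train = {"wedding": concept_vector(rng.normal(size=D_CNN)),
         "parade":  concept_vector(rng.normal(size=D_CNN))}

def classify(cnn_feat):
    """Assign the event whose training example is nearest in concept space."""
    v = concept_vector(cnn_feat)
    return min(train, key=lambda e: float(np.linalg.norm(v - train[e])))

label = classify(rng.normal(size=D_CNN))
```

The compact concept vector, rather than the raw 512-d CNN feature, is what makes matching against so few examples plausible.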

    Understanding User Intentions in Vertical Image Search

    With the development of the Internet and Web 2.0, large volumes of multimedia content have been made available online. It is highly desirable to provide easy access to such content, i.e., efficient and precise retrieval of images that satisfies users' needs. Toward this goal, content-based image retrieval (CBIR) has been intensively studied in the research community, while text-based search is more widely adopted in industry. Both approaches have inherent disadvantages and limitations; therefore, unlike the great success of text search, Web image search engines are still premature. In this thesis, we present iLike, a vertical image search engine which integrates both textual and visual features to improve retrieval performance. We bridge the semantic gap by capturing the meaning of each text term in the visual feature space, and re-weight visual features according to their significance to the query terms. We also bridge the user intention gap, since we are able to infer the "visual meanings" behind textual queries. Last but not least, we provide a visual thesaurus, generated from the statistical similarity between the visual-space representations of textual terms. Experimental results show that our approach improves both precision and recall compared with content-based or text-based image retrieval techniques. More importantly, search results from iLike are more consistent with users' perception of the query terms.
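One simple reading of the re-weighting idea — hedged here, since iLike's exact scheme is not given in the abstract — is that visual features which are consistent across the images tagged with a term are the ones significant to that term, and so receive higher weight. The sketch below implements that inverse-variance heuristic; the term "sunset", the feature count, and the synthetic data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Visual features of 40 images tagged with a hypothetical term, e.g. "sunset".
tagged = rng.normal(loc=2.0, size=(40, 8))
tagged[:, 0] *= 0.05  # feature 0 is very consistent for this term (synthetic)

def term_weights(feats, eps=1e-6):
    """Weight each visual feature inversely to its within-term variance."""
    w = 1.0 / (feats.var(axis=0) + eps)
    return w / w.sum()  # normalise so the weights sum to one

w = term_weights(tagged)  # feature 0 dominates: it best characterises the term
```

A distance in this re-weighted space then reflects the "visual meaning" of the query term rather than treating all visual features equally.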

    Implications of selective brain research for the philosophy of education.

    New technology and techniques in research have enabled neuroscientists to make major advances in understanding the nature and operation of the brain. These new findings could have far-reaching effects upon many other areas of life, especially education. This dissertation is an inquiry into some of the neuroscientific research that could have a significant effect upon the philosophy and practice of education in general, and the inclusion of the arts in education in particular. It supports the idea of a new area of specialization encompassing both education and neurology.