
    Supporting aspect-based video browsing - analysis of a user study

    In this paper, we present a novel video search interface based on the concept of aspect browsing. The proposed strategy is to assist the user in exploratory video search by actively suggesting new query terms and video shots. Our approach has the potential to narrow the "semantic gap" by allowing users to explore the data collection. First, we describe a clustering technique to identify potential aspects of a search. Then, we use the results to propose suggestions to the user to help them in their search task. Finally, we analyse this approach by exploiting the log files and the feedback of a user study.
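    The clustering step lends itself to a short illustration. Below is a minimal sketch of one plausible instantiation, k-means over TF-IDF vectors of shot metadata; the paper does not publish its exact algorithm, and the names here (suggest_aspect_terms, shot_texts) are illustrative, not from the paper.

```python
# Sketch: cluster video-shot descriptions into "aspects" and surface the
# top-weighted terms of each cluster as query suggestions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans


def suggest_aspect_terms(shot_texts, n_aspects=5, terms_per_aspect=3):
    """Cluster shot descriptions into aspects; return top terms per aspect."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(shot_texts)
    km = KMeans(n_clusters=n_aspects, n_init=10, random_state=0).fit(X)
    vocab = vectorizer.get_feature_names_out()
    suggestions = []
    for center in km.cluster_centers_:
        top = center.argsort()[::-1][:terms_per_aspect]
        suggestions.append([vocab[i] for i in top])
    return suggestions


# Each string stands in for the text associated with one video shot.
shots = [
    "goal celebration football stadium crowd",
    "football penalty kick goalkeeper save",
    "press conference coach interview microphone",
    "interview player locker room reporter",
]
for terms in suggest_aspect_terms(shots, n_aspects=2):
    print("aspect:", ", ".join(terms))
```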

    Assessing Visualization Techniques for the Search Process in Digital Libraries

    In this paper we present an overview of several visualization techniques to support the search process in Digital Libraries (DLs). The search process can typically be separated into three major phases: query formulation and refinement; browsing through result lists; and viewing and interacting with documents and their properties. We discuss a selection of popular visualization techniques that have been developed for the different phases to support the user during the search process. Alongside prototypes based on the different techniques, we show how the approaches have been implemented. Although various visualizations have been developed in prototypical systems, very few of these approaches have been adopted in today's DLs. We conclude that this is most likely because most systems are not evaluated intensively in real-life scenarios with real information seekers, and because the results of the interesting visualization techniques are often not comparable. Many of the assessed systems did not properly address the information needs of current users.

    Aspect-based video browsing - a user study

    In this paper, we present a user study on a novel video search interface based on the concept of aspect browsing. We aim to determine whether automatically suggesting new aspects can increase the performance of an aspect-based browser. The proposed strategy is to assist the user in exploratory video search by actively suggesting new query terms and video shots. We use a clustering technique to identify potential aspects and use the results to propose suggestions to the user to help them in their search task. We evaluate this approach by analysing the users' perception and by exploiting the log files.
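    The log-file side of such an evaluation can be illustrated with a toy example. The sketch below assumes a simple (user, timestamp, event) schema, which is our assumption rather than the study's published log format, and measures time to the first relevant shot alongside the number of suggestion clicks.

```python
# Sketch: derive two simple per-user measures from interaction logs.
from collections import defaultdict

# (user_id, seconds_into_session, event) tuples standing in for parsed logs.
events = [
    ("u1", 0, "query"), ("u1", 40, "suggestion_click"), ("u1", 55, "relevant_shot"),
    ("u2", 0, "query"), ("u2", 120, "relevant_shot"),
]

first_hit = {}           # time of first relevant shot per user
clicks = defaultdict(int)  # suggestion clicks per user
for user, t, event in events:
    if event == "suggestion_click":
        clicks[user] += 1
    if event == "relevant_shot" and user not in first_hit:
        first_hit[user] = t

for user, t in first_hit.items():
    print(f"{user}: first relevant shot at {t}s, {clicks[user]} suggestion clicks")
```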

    Learning to Hash-tag Videos with Tag2Vec

    User-given tags or labels are valuable resources for semantic understanding of visual media such as images and videos. Recently, a new type of labeling mechanism known as hash-tags has become increasingly popular on social media sites. In this paper, we study the problem of generating relevant and useful hash-tags for short video clips. Traditional data-driven approaches for tag enrichment and recommendation use direct visual similarity for label transfer and propagation. We instead attempt to learn a direct low-cost mapping from video to hash-tags using a two-step training process. We first employ a natural language processing (NLP) technique, skip-gram models with neural network training, to learn a low-dimensional vector representation of hash-tags (Tag2Vec) using a corpus of 10 million hash-tags. We then train an embedding function to map video features to the low-dimensional Tag2Vec space. We learn this embedding for 29 categories of short video clips with hash-tags. A query video without any tag information can then be directly mapped to the vector space of tags using the learned embedding, and relevant tags can be found by performing a simple nearest-neighbor retrieval in the Tag2Vec space. We validate the relevance of the tags suggested by our system qualitatively and quantitatively with a user study.
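    As a concrete illustration of the two-step pipeline, the sketch below trains a skip-gram Tag2Vec model with gensim, fits a regression from video features to the tag space, and retrieves tags by nearest neighbor. The tiny corpus, random stand-in features, and the choice of ridge regression as the embedding function are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: (1) skip-gram embedding of hash-tags, (2) video-to-tag-space
# regression, (3) nearest-neighbor tag retrieval for an unseen video.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import Ridge

# Step 1: skip-gram (sg=1) over tag lists, one list per video.
tag_lists = [
    ["skateboard", "trick", "street"],
    ["skateboard", "fail", "street"],
    ["cat", "funny", "pet"],
    ["dog", "funny", "pet"],
]
tag2vec = Word2Vec(tag_lists, vector_size=16, sg=1, min_count=1, seed=0)

# Step 2: map video features (random stand-ins here) to the tag space by
# regressing onto the mean Tag2Vec vector of each video's tags.
rng = np.random.default_rng(0)
video_feats = rng.normal(size=(len(tag_lists), 32))
targets = np.array([
    np.mean([tag2vec.wv[t] for t in tags], axis=0) for tags in tag_lists
])
embed = Ridge(alpha=1.0).fit(video_feats, targets)

# Step 3: embed an unseen video, then nearest-neighbor search over tags.
query_vec = embed.predict(rng.normal(size=(1, 32)))[0]
print(tag2vec.wv.similar_by_vector(query_vec, topn=3))
```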

    LiveSketch: Query Perturbations for Guided Sketch-based Visual Search

    LiveSketch is a novel algorithm for searching large image collections using hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch search by creating visual suggestions that augment the query as it is drawn, making query specification an iterative rather than one-shot process that helps disambiguate users' search intent. Our technical contributions are: a triplet convnet architecture that incorporates an RNN-based variational autoencoder to search for images using vector (stroke-based) queries; real-time clustering to identify likely search intents (and so, targets within the search embedding); and the use of backpropagation from those targets to perturb the input stroke sequence, thereby suggesting alterations to the query in order to guide the search. We show improvements in accuracy and time-to-task over contemporary baselines using a corpus of 67M images.
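    The query-perturbation idea can be sketched in a few lines: embed the query, pick a target (a cluster centre standing in for a likely search intent), and backpropagate the embedding distance to nudge the input strokes. The toy MLP below is a stand-in for the paper's triplet convnet and RNN-based variational autoencoder, so this is a minimal sketch of the mechanism rather than the published model.

```python
# Sketch: perturb a query's stroke representation by gradient descent on
# its embedding distance to an intended search target.
import torch

torch.manual_seed(0)
encoder = torch.nn.Sequential(  # toy stand-in for the sketch encoder
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
)

strokes = torch.randn(1, 10, requires_grad=True)  # flattened stroke sequence
target = torch.randn(1, 8)  # a cluster centre = one likely search intent

for _ in range(20):
    loss = torch.norm(encoder(strokes) - target)  # distance to the target
    loss.backward()
    with torch.no_grad():
        strokes -= 0.1 * strokes.grad             # nudge query toward intent
    strokes.grad.zero_()

print("final embedding distance:", loss.item())
```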