Explorative Study on Asymmetric Sketch Interactions for Object Retrieval in Virtual Reality
Drawing tools for Virtual Reality (VR) enable users to model 3D designs from within the virtual environment itself. These tools employ sketching and sculpting techniques known from desktop-based interfaces and apply them to hand-based controller interaction. While these techniques allow for mid-air sketching of basic shapes, it remains difficult for users to create detailed and comprehensive 3D models. Our work focuses on supporting users in designing the virtual environment around them by enhancing sketch-based interfaces with a supporting system for interactive model retrieval. Through sketching, an immersed user can query a database of detailed 3D models and place the retrieved models in the virtual environment. To understand supportive sketching within a virtual environment, we made an explorative comparison between asymmetric methods of sketch interaction, i.e., 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. Our work shows that different patterns emerge when users interact with 3D sketches rather than 2D sketches to compensate for different results from the retrieval system. In particular, users adopt different strategies when drawing on canvases of different sizes or when using a physical device instead of a virtual canvas. While we pose our work as a retrieval problem for 3D models of chairs, our results can be extrapolated to other sketching tasks for virtual environments.
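At a high level, the retrieval component described in this abstract amounts to mapping the user's strokes into the same descriptor space as the stored chair models and ranking by similarity. The following is a minimal, illustrative sketch of such a loop, assuming unit-normalized model descriptors and a placeholder encode_sketch(); it is not the authors' implementation.

    import numpy as np

    def encode_sketch(strokes):
        """Hypothetical encoder: maps a list of 3D stroke point arrays to a
        fixed-length descriptor (placeholder: a normalized point histogram)."""
        pts = np.concatenate(strokes, axis=0)
        hist, _ = np.histogramdd(pts, bins=(8, 8, 8), range=[(-1, 1)] * 3)
        v = hist.ravel().astype(np.float32)
        return v / (np.linalg.norm(v) + 1e-8)

    def retrieve(strokes, model_descriptors, k=5):
        """Return indices of the k database models closest to the sketch;
        model_descriptors is assumed to hold one unit-norm row per 3D model."""
        q = encode_sketch(strokes)
        sims = model_descriptors @ q          # cosine similarity
        return np.argsort(-sims)[:k]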
Towards an All-Purpose Content-Based Multimedia Information Retrieval System
The growth of multimedia collections - in terms of size, heterogeneity, and variety of media types - necessitates systems that are able to conjointly deal with several forms of media, especially when it comes to searching for particular objects. However, existing retrieval systems are organized in silos and treat different media types separately. As a consequence, retrieval across media types is either not supported at all or subject to major limitations. In this paper, we present vitrivr, a content-based multimedia information retrieval stack. As opposed to the keyword search approach implemented by most media management systems, vitrivr makes direct use of the object's content to facilitate different types of similarity search, such as Query-by-Example or Query-by-Sketch, for and, most importantly, across different media types - namely, images, audio, videos, and 3D models. Furthermore, we introduce a new web-based user interface that enables easy-to-use, multimodal retrieval from and browsing in mixed media collections. The effectiveness of vitrivr is shown on the basis of a user study that involves different query and media types. To the best of our knowledge, the full vitrivr stack is unique in that it is the first multimedia retrieval system that seamlessly integrates support for four different types of media. As such, it paves the way towards an all-purpose, content-based multimedia information retrieval system.
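The abstract does not spell out vitrivr's internals, but the kind of content-based, cross-media similarity search it describes can be sketched generically as follows; the SimilarityIndex class, its feature vectors, and the cosine ranking are illustrative assumptions, not vitrivr's actual API.

    import numpy as np

    class SimilarityIndex:
        """Toy content-based index: one feature matrix per media type."""
        def __init__(self):
            self.features = {}   # media type -> (ids, unit-norm feature matrix)

        def add(self, media_type, ids, vectors):
            m = np.asarray(vectors, dtype=np.float32)
            m /= np.linalg.norm(m, axis=1, keepdims=True) + 1e-8
            self.features[media_type] = (list(ids), m)

        def query(self, vector, media_types=None, k=10):
            """Query-by-Example: rank objects of the requested media types
            by cosine similarity to the example's feature vector."""
            q = np.asarray(vector, dtype=np.float32)
            q /= np.linalg.norm(q) + 1e-8
            hits = []
            for mtype, (ids, m) in self.features.items():
                if media_types and mtype not in media_types:
                    continue
                for obj_id, score in zip(ids, m @ q):
                    hits.append((float(score), mtype, obj_id))
            return sorted(hits, reverse=True)[:k]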
Creation of virtual worlds from 3D models retrieved from content aware networks based on sketch and image queries
The recent emergence of user-generated content requires new content creation tools that are both easy to learn and easy to use. These new tools should enable the user to construct new high-quality content with minimum effort; it is essential to allow existing multimedia content to be reused as building blocks when creating new content. In this work we present a new tool for automatically constructing virtual worlds with minimum user intervention. Users can create these worlds by drawing a simple sketch, or by using interactively segmented 2D objects from larger images. The system receives the sketch or the segmented image as a query, and uses it to find similar 3D models that are stored in a Content Centric Network. The user selects a suitable model from the retrieved models, and the system uses it to automatically construct a virtual 3D world.
Deep Shape Matching
We cast shape matching as metric learning with convolutional networks. We break the end-to-end process of image representation into two parts. Firstly, well-established efficient methods are chosen to turn the images into edge maps. Secondly, the network is trained with edge maps of landmark images, which are automatically obtained by a structure-from-motion pipeline. The learned representation is evaluated on a range of different tasks, providing improvements on challenging cases of domain generalization, generic sketch-based image retrieval or its fine-grained counterpart. In contrast to other methods that learn a different model per task, object category, or domain, we use the same network throughout all our experiments, achieving state-of-the-art results in multiple benchmarks.
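The abstract leaves the training objective unspecified; a common way to realize metric learning over edge maps is an embedding CNN trained with a triplet loss, as in the illustrative PyTorch sketch below (the architecture, loss, and margin are assumptions, not the paper's exact configuration).

    import torch
    import torch.nn as nn

    class EdgeMapEmbedder(nn.Module):
        """Small CNN mapping a single-channel edge map to an L2-normalized embedding."""
        def __init__(self, dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, x):
            z = self.fc(self.conv(x).flatten(1))
            return nn.functional.normalize(z, dim=1)

    model = EdgeMapEmbedder()
    loss_fn = nn.TripletMarginLoss(margin=0.2)
    # anchor/positive: edge maps of the same landmark; negative: a different one
    anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
    loss = loss_fn(model(anchor), model(positive), model(negative))
    loss.backward()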
A words-of-interest model of sketch representation for image retrieval
In this paper we propose a method for sketch-based image retrieval. A sketch is an expressive medium capable of conveying semantic messages from the user, and retrieving images with sketches is consistent with users' cognitive psychology. In order to narrow the semantic gap between the user and the images in the database, we preprocess all the images into sketches using the coherent line drawing algorithm. During sketch extraction, saliency maps are used to filter out redundant background information while preserving the important semantic information. We use a variant of the Words-of-Interest (WoI) model to retrieve relevant images for the user according to the query. The WoI model is based on the Bag-of-Visual-Words (BoW) model, which has proven successful for information retrieval. However, BoW ignores the spatial relationships among visual words, which are important for sketch representation. Our method takes advantage of the spatial information of the query to select words of interest. Experimental results demonstrate that our sketch-based retrieval method achieves a good tradeoff between retrieval accuracy and semantic representation of users' queries.
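A rough picture of the underlying steps, assuming local descriptors with keypoint locations and a boolean mask covering the query strokes, might look as follows; the vocabulary size, descriptor type, and the spatial "interest" test are illustrative assumptions rather than the paper's exact procedure.

    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def build_vocabulary(descriptors, n_words=200, seed=0):
        """Cluster local descriptors pooled from all sketches into a visual vocabulary."""
        return MiniBatchKMeans(n_clusters=n_words, random_state=seed).fit(descriptors)

    def words_of_interest_histogram(vocab, descriptors, keypoints, query_mask):
        """Histogram over visual words, keeping only descriptors whose keypoint
        falls inside the region covered by the query sketch (query_mask is a
        boolean image)."""
        words = vocab.predict(descriptors)
        hist = np.zeros(vocab.n_clusters, dtype=np.float32)
        for w, (x, y) in zip(words, keypoints):
            if query_mask[int(y), int(x)]:      # spatial "interest" test
                hist[w] += 1.0
        return hist / (hist.sum() + 1e-8)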
A semantic feature for human motion retrieval
With the explosive growth of motion capture data, it has become imperative in animation production to have an efficient search engine that can retrieve motions from a large motion repository. However, because of the high dimensionality of the data space and the complexity of matching methods, most existing approaches cannot return results in real time. This paper proposes a high-level semantic feature in a low-dimensional space to represent the essential characteristics of different motion classes. Based on statistical training of a Gaussian Mixture Model, this feature can effectively achieve motion matching at both the global clip level and the local frame level. Experimental results show that our approach can retrieve and rank similar motions from a large motion database in real time, and can also annotate motions automatically on the fly.
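One plausible reading of such a feature is to fit a Gaussian Mixture Model to frame-level pose descriptors and use averaged posterior probabilities over the mixture components as a low-dimensional clip representation; the sketch below illustrates this reading with scikit-learn, and the descriptor, component count, and distance measure are assumptions rather than the paper's exact method.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_motion_gmm(frame_features, n_components=16, seed=0):
        """Fit a GMM to pose descriptors pooled over the training motions."""
        return GaussianMixture(n_components=n_components,
                               covariance_type="diag",
                               random_state=seed).fit(frame_features)

    def clip_semantic_feature(gmm, clip_frames):
        """Low-dimensional clip feature: mean posterior over mixture components."""
        return gmm.predict_proba(clip_frames).mean(axis=0)

    def retrieve_similar(gmm, query_clip, database_clips, k=5):
        """Rank database clips by Euclidean distance in the semantic feature space."""
        q = clip_semantic_feature(gmm, query_clip)
        feats = np.stack([clip_semantic_feature(gmm, c) for c in database_clips])
        return np.argsort(np.linalg.norm(feats - q, axis=1))[:k]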