Gas Source Localization Strategies for Teleoperated Mobile Robots. An Experimental Analysis
Gas source localization (GSL) is one of the most important and direct applications of a gas-sensitive mobile robot, and consists of searching for one or multiple volatile
emission sources with a mobile robot that has enhanced sensing
capabilities (e.g. olfaction, wind-flow sensing). This work addresses GSL by employing a teleoperated mobile robot, and focuses on
which search strategy is the most suitable for this teleoperated approach. Four different search strategies, namely chemotaxis,
anemotaxis, gas-mapping, and visual-aided search, are analyzed
and evaluated according to a set of proposed indicators (e.g. accuracy,
efficiency, and success rate) to determine the most suitable
one for a human-teleoperated mobile robot. Experimental validation is carried out on a large dataset of over 150 trials in which volunteer operators had to locate a gas leak in a virtual environment under varied and realistic environmental conditions (i.e. different wind-flow patterns and gas source locations). We report several findings, from which we highlight that, contrary to intuition, visual-aided search is not always the best strategy; its suitability depends on the environmental conditions and the operator's ability to understand how the gas distributes.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
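The comparison described above boils down to aggregating per-trial results into the proposed indicators. A minimal sketch of that aggregation is given below; the trial fields (`found`, `error_m`, `time_s`) and the exact indicator definitions are illustrative assumptions, not the paper's actual data schema.

```python
def evaluate_strategy(trials):
    """Aggregate per-trial results into summary indicators.

    Each trial is a dict with hypothetical fields:
      - found:   whether the operator located the correct source
      - error_m: final distance to the true source, in metres
      - time_s:  duration of the search, in seconds
    """
    success = [t for t in trials if t["found"]]
    return {
        # fraction of trials in which the source was found
        "success_rate": len(success) / len(trials),
        # mean localization error over successful trials (a proxy for accuracy)
        "accuracy_m": sum(t["error_m"] for t in success) / max(len(success), 1),
        # mean search time over successful trials (a proxy for efficiency)
        "mean_time_s": sum(t["time_s"] for t in success) / max(len(success), 1),
    }

# Toy trials for one strategy (e.g. anemotaxis)
trials = [
    {"found": True, "error_m": 0.8, "time_s": 120},
    {"found": True, "error_m": 1.2, "time_s": 150},
    {"found": False, "error_m": 5.0, "time_s": 300},
]
print(evaluate_strategy(trials))
```

Computing the same indicators per strategy and per environmental condition is what allows the "best strategy depends on the conditions" conclusion to be drawn.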
A Convolutional Neural Network-based Patent Image Retrieval Method for Design Ideation
The patent database is often used in searches of inspirational stimuli for
innovative design opportunities because of its large size, extensive variety
and rich design information in patent documents. However, most patent mining
research only focuses on textual information and ignores visual information.
Herein, we propose a convolutional neural network (CNN)-based patent image
retrieval method. The core of this approach is a novel neural network
architecture named Dual-VGG, which aims to accomplish two tasks: visual
material type prediction and international patent classification (IPC) class
label prediction. In turn, the trained neural network provides the deep
features in the image embedding vectors that can be utilized for patent image
retrieval and visual mapping. Both training tasks and the patent image
embedding space are evaluated to demonstrate the performance of our model. This
approach is also illustrated in a case study of robot arm design retrieval.
Compared to traditional keyword-based searching and Google image searching, the
proposed method discovers more useful visual information for engineering
design.
Comment: 11 pages, 11 figures
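The dual-task idea behind Dual-VGG can be sketched as one shared embedding feeding two classification heads, one for visual material type and one for IPC class, with the shared embedding later reused for retrieval. The tiny randomly initialized linear model below only illustrates that structure; the actual architecture is a trained VGG-based CNN.

```python
import math
import random

random.seed(0)

DIM = 8       # toy embedding size (Dual-VGG uses deep CNN features)
N_TYPES = 3   # number of visual material types (illustrative)
N_IPC = 5     # number of IPC class labels (illustrative)

def linear(x, w):
    """Apply one linear head: w holds one weight row per output class."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Shared backbone output (a random stand-in for a CNN image embedding)
embedding = [random.gauss(0, 1) for _ in range(DIM)]

# Two task-specific heads consume the same shared embedding
type_head = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_TYPES)]
ipc_head = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_IPC)]

type_probs = softmax(linear(embedding, type_head))
ipc_probs = softmax(linear(embedding, ipc_head))

# The shared embedding itself is what patent image retrieval reuses
print(len(embedding), len(type_probs), len(ipc_probs))
```

Training both heads against their respective labels is what shapes the shared embedding into a retrieval-friendly space.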
Open Cross-Domain Visual Search
This paper addresses cross-domain visual search, where visual queries
retrieve category samples from a different domain. For example, we may want to
sketch an airplane and retrieve photographs of airplanes. Despite considerable
progress, the search occurs in a closed setting between two pre-defined
domains. In this paper, we make the step towards an open setting where multiple
visual domains are available. This notably translates into a search between any
pair of domains, from a combination of domains or within multiple domains. We
introduce a simple yet effective approach. We formulate the search as a
mapping from every visual domain to a common semantic space, where categories
are represented by hyperspherical prototypes. Open cross-domain visual search
is then performed by searching in the common semantic space, regardless of
which domains are used as source or target. Domains are combined in the common
space to search from or within multiple domains simultaneously. A separate
training of every domain-specific mapping function enables an efficient scaling
to any number of domains without affecting the search performance. We
empirically illustrate our capability to perform open cross-domain visual
search in three different scenarios. Our approach is competitive with respect
to existing closed settings, where we obtain state-of-the-art results on
several benchmarks for three sketch-based search tasks.
Comment: Accepted at Computer Vision and Image Understanding (CVIU)
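Search in the common semantic space amounts to nearest-prototype lookup on the unit hypersphere. The sketch below illustrates that lookup with hand-set prototypes and queries; in the paper the prototypes and the domain-specific mapping functions are learned, not fixed as here.

```python
import math

def normalize(v):
    """Project a vector onto the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def nearest_prototype(query, prototypes):
    """Return the category whose hyperspherical prototype has the highest
    cosine similarity with the (already domain-mapped) query embedding."""
    q = normalize(query)
    best, best_sim = None, -2.0
    for category, proto in prototypes.items():
        p = normalize(proto)
        sim = sum(a * b for a, b in zip(q, p))
        if sim > best_sim:
            best, best_sim = category, sim
    return best

# Toy prototypes for three categories in a 3-d semantic space
prototypes = {
    "airplane": [1.0, 0.0, 0.0],
    "car": [0.0, 1.0, 0.0],
    "bird": [0.0, 0.0, 1.0],
}

# A sketch query mapped near the "airplane" prototype retrieves that category,
# regardless of which visual domain produced the embedding
print(nearest_prototype([0.9, 0.1, 0.0], prototypes))
```

Because every domain maps into this one space, the same lookup serves any source/target pair, which is what makes the open setting scale to new domains.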
A content-based retrieval system for UAV-like video and associated metadata
In this paper we provide an overview of a content-based retrieval (CBR) system that has been specifically designed for handling UAV video and associated metadata. Our emphasis in designing this system is on managing large quantities of such information and providing intuitive and efficient access mechanisms to this content, rather than on analysis of the video content. The retrieval unit in our system is termed a "trip". At capture time, each trip consists of an MPEG-1 video stream and a set of time-stamped GPS locations. An analysis process automatically selects and associates GPS locations with the video timeline. The indexed trip is then stored in a shared trip repository. The repository forms the backend of an MPEG-21 compliant Web 2.0 application for subsequent querying, browsing, annotation and video playback. The system interface allows users to search/browse across the entire archive of trips and, depending on their access rights, to annotate other users' trips with additional information. Interaction with the CBR system is via a novel interactive map-based interface. This interface supports content access by time, date, region of interest on the map, previously annotated specific locations of interest, and combinations of these. To develop such a system and investigate its practical usefulness in real-world scenarios, a significant amount of appropriate data is clearly required. In the absence of a large volume of UAV data with which to work, we have simulated UAV-like data using GPS-tagged video content captured from moving vehicles.
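The analysis step that associates time-stamped GPS locations with the video timeline can be sketched as a nearest-timestamp lookup over the GPS fixes; the data layout below is an illustrative assumption, not the system's actual index format.

```python
import bisect

def associate_gps(video_times, gps_fixes):
    """For each video timestamp (seconds from capture start), pick the GPS
    fix whose timestamp is closest. gps_fixes is a time-sorted list of
    (t, lat, lon) tuples."""
    ts = [t for t, _, _ in gps_fixes]
    out = []
    for vt in video_times:
        i = bisect.bisect_left(ts, vt)
        # Compare the neighbours on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts)]
        j = min(candidates, key=lambda k: abs(ts[k] - vt))
        out.append((vt, gps_fixes[j][1], gps_fixes[j][2]))
    return out

# Toy trip: three GPS fixes, three video timestamps to index
gps = [(0.0, 53.30, -6.22), (5.0, 53.31, -6.21), (10.0, 53.32, -6.20)]
print(associate_gps([1.0, 6.0, 9.5], gps))
```

Once every indexed video timestamp carries a location, the map-based interface can resolve a region-of-interest query to the matching segments of each trip.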
How can heat maps of indexing vocabularies be utilized for information seeking purposes?
The ability to browse an information space in a structured way by exploiting
similarities and dissimilarities between information objects is crucial for
knowledge discovery. Knowledge maps use visualizations to gain insights into
the structure of large-scale information spaces, but are still far from
being applicable to search tasks. The paper proposes a use case for enhancing
search term recommendations with heat-map visualizations of co-word
relationships taken from the indexing vocabulary. By contrasting areas of
different "heat", the user can identify mainstream areas of the field
in question more easily.
Comment: URL workshop proceedings: http://ceur-ws.org/Vol-1311
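The "heat" driving such a map comes from co-word frequencies in the indexing vocabulary. A minimal sketch, assuming heat is simply the normalized co-occurrence count of each descriptor with the query term; the real system works on a controlled vocabulary and a full visualization pipeline.

```python
from collections import Counter

def coword_heat(documents, query_term):
    """Count how often each indexing term co-occurs with query_term across
    documents, then normalize to [0, 1] so the values can drive a
    heat-map colour scale."""
    counts = Counter()
    for terms in documents:
        if query_term in terms:
            for t in terms:
                if t != query_term:
                    counts[t] += 1
    peak = max(counts.values(), default=1)
    return {t: c / peak for t, c in counts.items()}

# Toy indexed documents, each a set of descriptor terms
docs = [
    {"information retrieval", "visualization", "knowledge maps"},
    {"information retrieval", "visualization"},
    {"information retrieval", "indexing"},
    {"visualization", "indexing"},
]
heat = coword_heat(docs, "information retrieval")
print(sorted(heat.items(), key=lambda kv: -kv[1]))
```

Hot regions then mark the mainstream co-occurring descriptors, which is what the proposed search term recommendation use case surfaces to the searcher.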