Assisted Viewpoint Interaction for 3D Visualization
Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to one of two extremes: simplified controls or direct presentation. This research promotes hybrid interfaces that offer supportive yet unscripted exploration of a virtual environment.

Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design.

The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation.
Robust Continuous System Integration for Critical Deep-Sea Robot Operations Using Knowledge-Enabled Simulation in the Loop
Deep-sea robot operations demand a high level of safety, efficiency and
reliability. As a consequence, measures within the development stage have to be
implemented to extensively evaluate and benchmark system components ranging
from data acquisition, perception and localization to control. We present an
approach based on high-fidelity simulation that embeds spatial and
environmental conditions from recorded real-world data. This simulation in the
loop (SIL) methodology allows for mitigating the discrepancy between simulation
and real-world conditions, e.g. regarding sensor noise. As a result, this work
provides a platform to thoroughly investigate and benchmark behaviors of system
components concurrently under real and simulated conditions. The conducted
evaluation shows the benefit of the proposed work in tasks related to
perception and self-localization under changing spatial and environmental
conditions.

Comment: published at IROS 201
A Robust Localization System for Inspection Robots in Sewer Networks
Sewers represent a very important infrastructure of cities whose state should be monitored
periodically. However, the length of such infrastructure prevents sensor networks from being
applicable. In this paper, we present a mobile platform (SIAR) designed to inspect the sewer network.
It is capable of sensing gas concentrations and detecting failures in the network such as cracks and
holes in the floor and walls, or zones where the water is not flowing. These alarms should be precisely
geo-localized to allow the operators to perform the required corrective measures. To this end, this
paper presents a robust localization system for global pose estimation in sewers. It makes use of prior
information of the sewer network, including its topology, the different cross sections traversed and
the position of some elements such as manholes. The system is based on a Monte Carlo Localization
system that fuses wheel and RGB-D odometry for the prediction stage. The update step takes into
account the sewer network topology for discarding wrong hypotheses. Additionally, the localization
is further refined with novel updating steps proposed in this paper which are activated whenever
a discrete element in the sewer network is detected or the relative orientation of the robot over the
sewer gallery can be estimated. Each part of the system has been validated with real data obtained
from the sewers of Barcelona. The whole system is able to obtain median localization errors in the
order of one meter in all cases. Finally, the paper also includes comparisons with state-of-the-art
Simultaneous Localization and Mapping (SLAM) systems that demonstrate the convenience of the
approach.

Funding: Unión Europea ECHORD++ 601116; Ministerio de Ciencia, Innovación y Universidades de España RTI2018-100847-B-C2
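The localization scheme described above (Monte Carlo Localization with an odometry-driven prediction step, topology-based hypothesis pruning, and a refinement update when a discrete element such as a manhole is detected) can be sketched in simplified form. This is a minimal one-dimensional illustration only: the gallery length, manhole positions, noise levels, and particle counts are all invented, and the real SIAR system fuses wheel and RGB-D odometry over the full network topology.

```python
import random
import math

# Illustrative 1-D sketch of MCL along a single sewer gallery.
# All constants below are assumptions for the example, not SIAR's values.
GALLERY_LENGTH = 100.0            # assumed gallery length (m)
MANHOLE_POSITIONS = [25.0, 75.0]  # assumed known manhole positions (m)

def predict(particles, odom_delta, noise_std=0.1):
    """Prediction step: advance each particle by the fused odometry
    estimate, perturbed by Gaussian motion noise."""
    return [p + odom_delta + random.gauss(0.0, noise_std) for p in particles]

def topology_update(particles):
    """Discard hypotheses that leave the known gallery (the topology
    check), then resample to keep the particle count constant."""
    survivors = [p for p in particles if 0.0 <= p <= GALLERY_LENGTH]
    while len(survivors) < len(particles):
        survivors.append(random.choice(survivors))
    return survivors

def manhole_update(particles, sigma=1.0):
    """Refinement step triggered when a manhole is detected: importance
    resampling with a likelihood centred on the nearest known manhole."""
    weights = []
    for p in particles:
        d = min(abs(p - m) for m in MANHOLE_POSITIONS)
        weights.append(math.exp(-0.5 * (d / sigma) ** 2))
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, GALLERY_LENGTH) for _ in range(500)]
for _ in range(24):                    # robot advances ~1 m per step
    particles = predict(particles, odom_delta=1.0)
    particles = topology_update(particles)
particles = manhole_update(particles)  # a manhole is observed
estimate = sum(particles) / len(particles)
print(f"estimated position: {estimate:.1f} m")
```

After the manhole update, surviving hypotheses cluster around the known manhole positions, which is the mechanism by which discrete-element detections sharpen the global pose estimate.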
Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets
In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems.
To support safety-critical situations and time-critical decision-making in dynamic environments, and to enable easy and effective management, browsing, and searching of spatiotemporal data, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data. It allows operators to navigate through the spatiotemporal information rather than through the environments being examined, while maintaining all necessary global and local situation awareness.
To empirically demonstrate the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. We designed and conducted a user study and interaction-mode experiments. The Event-Lens system showed a consistent advantage across multiple moving-target marking-task performance measures, and participants' attentional control, spatial ability, and action video gaming experience were found to affect their overall performance.
SUAVE: Integrating UAV video using a 3D model
Controlling an unmanned aerial vehicle (UAV) requires the operator to perform continuous surveillance and path planning. The operator's situation awareness degrades as an increasing number of surveillance videos must be viewed and integrated. The Picture-in-Picture (PiP) display provides a solution for integrating video from multiple UAV cameras by allowing the operator to view each video feed in the context of the surrounding terrain. The experimental SUAVE (Simple Unmanned Aerial Vehicle Environment) display extends PiP methods by sampling imagery from the video stream to texture a 3D map of the terrain. The operator can then inspect this imagery using world-in-miniature (WIM) or fly-through methods. We investigate the properties and advantages of SUAVE in the context of a search mission with 3 UAVs.
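The core of the texturing step described above is deciding, for each terrain vertex, which pixel of a video frame should colour it. A minimal sketch of that projection, assuming an idealized downward-looking pinhole camera, is shown below; the camera pose, focal length, image size, and terrain grid are all invented for illustration and are not SUAVE's actual parameters.

```python
# Sketch of sampling video imagery onto a 3D terrain map: project each
# terrain vertex through a pinhole camera looking straight down (-Z) and
# record the (u, v) pixel that would texture it. All values are assumptions.

def project_to_pixel(point, cam_pos, focal, img_w, img_h):
    """Project a world point into the image of a nadir-pointing camera at
    cam_pos. Returns (u, v) pixel coordinates, or None if the point is
    behind the camera or outside the frame."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    depth = cam_pos[2] - point[2]      # distance along the view axis
    if depth <= 0:
        return None                    # point at or above the camera
    u = focal * x / depth + img_w / 2.0
    v = focal * y / depth + img_h / 2.0
    if 0 <= u < img_w and 0 <= v < img_h:
        return (u, v)                  # texture coordinate in this frame
    return None                        # outside the camera footprint

# Texture an 11x11 terrain grid from one video frame.
cam = (5.0, 5.0, 50.0)                 # hypothetical UAV 50 m above ground
terrain = [(x, y, 0.0) for x in range(11) for y in range(11)]
uv_map = {p: project_to_pixel(p, cam, focal=400.0, img_w=640, img_h=480)
          for p in terrain}
covered = sum(1 for uv in uv_map.values() if uv is not None)
print(covered, "of", len(terrain), "vertices textured by this frame")
```

Vertices that fall outside the camera footprint simply keep their previous texture, which is how successive frames from a moving UAV incrementally paint the 3D map.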