2,384 research outputs found

    Asynchronous displays for multi-UV search tasks

    Synchronous video has long been the preferred mode for controlling remote robots, with other modes such as asynchronous control used only when unavoidable, as in the case of interplanetary robotics. We identify two basic problems with controlling multiple robots using synchronous displays: operator overload and information fusion. Synchronous displays from multiple robots can easily overwhelm an operator who must search video for targets. If targets are plentiful, the operator will likely miss targets that enter and leave unattended views while dealing with others that were noticed. The related fusion problem arises because the robots' multiple fields of view may overlap, forcing the operator to reconcile different views from different perspectives and form an awareness of the environment by "piecing them together". We have conducted a series of experiments investigating the suitability of asynchronous displays for multi-UV search. Our first experiments involved static panoramas in which operators selected locations at which robots halted and panned their cameras to capture a record of what could be seen from each location. A subsequent experiment investigated the hypothesis that the relative performance of the panoramic display would improve as the number of robots was increased, causing greater overload and fusion problems. In a subsequent Image Queue system we used automated path planning and also automated the selection of imagery for presentation by choosing a greedy selection of non-overlapping views. A fourth set of experiments used the SUAVE display, an asynchronous variant of the picture-in-picture technique for video from multiple UAVs. The panoramic displays, which addressed only the overload problem, led to performance similar to synchronous video, while the Image Queue and SUAVE displays, which addressed fusion as well, led to improved performance on a number of measures. In this paper we review our experiences in designing and testing asynchronous displays and discuss challenges to their use, including tracking dynamic targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc.

    Effects of automation on situation awareness in controlling robot teams

    Declines in situation awareness (SA) often accompany automation. Some of these effects have been characterized as out-of-the-loop behavior, complacency, and automation bias. Increasing autonomy in multi-robot control might be expected to produce similar declines in operators’ SA. In this paper we review a series of experiments in which automation is introduced into the control of robot teams. Automating path planning in a foraging task improved both target detection and localization, which is closely tied to SA. Timing data, however, suggested small declines in SA for robot location and pose. Automation of image acquisition, by contrast, led to poorer localization. Findings are discussed and alternative explanations involving shifts in strategy are proposed.

    Asynchronous control with ATR for large robot teams

    In this paper, we discuss and investigate the advantages of an asynchronous display, called the "image queue", tested on an urban search and rescue foraging task. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment by selecting a small number of images that together cover large portions of the area searched. This asynchronous approach allows operators to search through the large amount of data gathered by autonomous robot teams, and provides comprehensive and scalable displays that afford a network-centric perspective for unmanned ground vehicles (UGVs). In the reported experiment, automatic target recognition (ATR) was used to augment utilities based on visual coverage in selecting imagery for presentation to the operator. In the cued condition a box was drawn around the region in which a possible target was detected. In the no-cue condition no box was drawn, although the target detection probability continued to play a role in the selection of imagery. We found that operators using the image queue displays missed fewer victims and relied on teleoperation less often than those using streaming video. Image queue users in the no-cue condition did better at avoiding false alarms and reported lower workload than those in the cued condition. Copyright 2011 by the Human Factors and Ergonomics Society, Inc. All rights reserved.
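The greedy, coverage-based selection that the image queue abstracts describe can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: it assumes each candidate frame can be summarized as a set of covered grid cells, and the function and frame names are hypothetical.

```python
# Hypothetical sketch of greedy image-queue selection: each captured frame
# covers a set of ground cells; repeatedly pick the frame that adds the most
# not-yet-covered cells until the queue is full or nothing new remains.
def build_image_queue(frames, queue_size):
    """frames: dict mapping frame id -> set of covered grid cells."""
    covered = set()
    queue = []
    remaining = dict(frames)
    for _ in range(min(queue_size, len(frames))):
        # pick the frame contributing the most new coverage
        best = max(remaining, key=lambda f: len(remaining[f] - covered))
        if not (remaining[best] - covered):  # nothing new to show
            break
        queue.append(best)
        covered |= remaining.pop(best)
    return queue

# Example: three views, two of them overlapping
frames = {
    "img_a": {1, 2, 3, 4},
    "img_b": {3, 4, 5},
    "img_c": {6},
}
print(build_image_queue(frames, 3))  # ['img_a', 'img_b', 'img_c']
```

Because each step keeps only the marginal coverage gain, heavily overlapping views are naturally suppressed, which is what lets a small queue of images stand in for hours of streaming video.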

    Scalable target detection for large robot teams

    In this paper, we present an asynchronous display method, coined the image queue, which allows operators to search through a large amount of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with emphasis on urban search and rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims. It fills the gap for comprehensive and scalable displays that provide a network-centric perspective for UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call-center approach to target detection. With such an approach we can scale up to very large multi-robot systems gathering huge amounts of data that are then distributed to multiple operators. Copyright 2011 ACM.

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data are increasing rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data produce information clutter and overload, as well as a high cognitive load for users of these systems. To meet future safety-critical situations and enhance time-critical decision-making missions in dynamic environments, and to support easy and effective managing, browsing, and searching of spatiotemporal data, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data that allows operators to navigate through the spatiotemporal information rather than through the environments being examined, and to maintain all necessary global and local situation awareness. To empirically demonstrate the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. A user study and interaction-mode experiments were designed and conducted. The Event-Lens system showed a consistent advantage on multiple moving-target marking-task performance measures. Participants’ attentional control, spatial ability, and action video gaming experience were also found to affect overall performance.
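The "asynchronous prioritized images" idea described above can be sketched with a simple priority queue: each incoming frame is scored, and the operator always reviews the highest-priority image next instead of watching a live stream. This is an illustrative sketch only, not the Event-Lens implementation; the class, frame names, and scoring values are hypothetical.

```python
# Hypothetical sketch: a min-heap (via negated scores) that serves the
# operator the highest-priority sensor image first, decoupling review
# order from capture order.
import heapq
import itertools

class PrioritizedImageFeed:
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # stable order for equal priorities

    def add(self, image_id, priority):
        # heapq is a min-heap, so negate priority for highest-first ordering
        heapq.heappush(self._heap, (-priority, next(self._tie), image_id))

    def next_image(self):
        # returns None when every queued image has been reviewed
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

feed = PrioritizedImageFeed()
feed.add("frame_017", priority=0.9)  # e.g. likely moving target
feed.add("frame_003", priority=0.2)
feed.add("frame_044", priority=0.7)
print(feed.next_image())  # frame_017
```

The key design point is that review order is decoupled from arrival order, which is what makes the display asynchronous and scalable to many sensor feeds.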

    Waterborne Autonomous VEhicle

    This project designed and realized the Waterborne Autonomous VEhicle (WAVE), a submersible modular robotic platform to enable research on underwater technologies at WPI at minimal cost. WAVE’s primary design objectives were modularity and expandability while adhering to the regulations for the international competition held by the Association for Unmanned Vehicle Systems International. WAVE’s core features include a six degree-of-freedom chassis, a modular electronic infrastructure, and an easily configurable software framework.
