
    Asynchronous displays for multi-UV search tasks

    Synchronous video has long been the preferred mode for controlling remote robots, with other modes, such as asynchronous control, used only when unavoidable, as in interplanetary robotics. We identify two basic problems in controlling multiple robots through synchronous displays: operator overload and information fusion. Synchronous displays from multiple robots can easily overwhelm an operator who must search video for targets. If targets are plentiful, the operator will likely miss targets that enter and leave unattended views while dealing with others already noticed. The related fusion problem arises because the robots' multiple fields of view may overlap, forcing the operator to reconcile different views from different perspectives and form an awareness of the environment by "piecing them together". We have conducted a series of experiments investigating the suitability of asynchronous displays for multi-UV search. Our first experiments involved static panoramas: operators selected locations at which robots halted and panned their cameras to capture a record of what could be seen from each location. A subsequent experiment investigated the hypothesis that the relative performance of the panoramic display would improve as the number of robots increased, causing greater overload and fusion problems. A later Image Queue system used automated path planning and also automated the selection of imagery for presentation by making a greedy selection of non-overlapping views. A fourth set of experiments used the SUAVE display, an asynchronous variant of the picture-in-picture technique for video from multiple UAVs. The panoramic displays, which addressed only the overload problem, led to performance similar to synchronous video, while the Image Queue and SUAVE displays, which addressed fusion as well, led to improved performance on a number of measures. In this paper we review our experiences in designing and testing asynchronous displays and discuss challenges to their use, including tracking dynamic targets. © 2012 by the American Institute of Aeronautics and Astronautics, Inc.
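    The greedy selection of non-overlapping views that the Image Queue uses is, in effect, greedy set cover over camera footprints. The sketch below is a minimal illustration of that idea, assuming each frame's visible footprint is known as a set of map-grid cells; all names (Frame, select_views, budget) are hypothetical and not taken from the paper.

    ```python
    # Minimal sketch of greedy non-overlapping view selection.
    # Footprints are modeled as sets of grid-cell IDs; names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        frame_id: int
        footprint: set[int]  # grid cells visible in this frame

    def select_views(frames: list[Frame], budget: int) -> list[Frame]:
        """Greedily pick frames that add the most uncovered area."""
        covered: set[int] = set()
        chosen: list[Frame] = []
        remaining = list(frames)
        while remaining and len(chosen) < budget:
            # Pick the frame contributing the largest amount of new coverage.
            best = max(remaining, key=lambda f: len(f.footprint - covered))
            if not best.footprint - covered:
                break  # every remaining frame is fully redundant
            chosen.append(best)
            covered |= best.footprint
            remaining.remove(best)
        return chosen
    ```

    Because each chosen frame must add new area, the selected views overlap as little as possible, which is what lets a small image set stand in for hours of streaming video.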

    Asynchronous control with ATR for large robot teams

    In this paper, we discuss and investigate the advantages of an asynchronous display, called the "image queue", tested in an urban search and rescue foraging task. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment by selecting a small number of images that together cover large portions of the area searched. This asynchronous approach lets operators search through the large amounts of data gathered by autonomous robot teams and provides comprehensive, scalable displays that give a network-centric perspective on unmanned ground vehicles (UGVs). In the reported experiment, automatic target recognition (ATR) was used to augment coverage-based utilities in selecting imagery for presentation to the operator. In the cued condition, a box was drawn around the region in which a possible target had been detected; in the no-cue condition, no box was drawn, although the target detection probability still influenced the selection of imagery. We found that operators using the image queue displays missed fewer victims and relied on teleoperation less often than those using streaming video. Image queue users in the no-cue condition were better at avoiding false alarms and reported lower workload than those in the cued condition. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
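    One way to read "ATR was used to augment utilities based on visual coverage" is as a weighted blend of the two signals when ranking candidate imagery. The sketch below shows that reading; the blending weight and the names (alpha, image_utility) are assumptions for illustration, not the paper's actual formula.

    ```python
    # Hedged sketch: blend an image's coverage utility with its ATR
    # detection probability. Weights and names are assumptions.
    def image_utility(new_coverage: float, atr_probability: float,
                      alpha: float = 0.3) -> float:
        """Blend area coverage (0..1) with ATR confidence (0..1)."""
        return (1.0 - alpha) * new_coverage + alpha * atr_probability

    # Example: a frame covering little new area but with a strong ATR hit
    # can outrank a frame with moderate coverage and no detection.
    print(image_utility(0.10, 0.95))  # 0.355
    print(image_utility(0.40, 0.00))  # 0.28
    ```

    Under this reading, the same score can drive selection in both conditions, with the cue box being purely a presentation-layer overlay in the cued condition.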

    Scalable target detection for large robot teams

    In this paper, we present an asynchronous display method, called the image queue, which allows operators to search through the large amounts of data gathered by autonomous robot teams. We discuss and investigate the advantages of an asynchronous display for foraging tasks, with emphasis on Urban Search and Rescue. The image queue approach mines video data to present the operator with a relevant and comprehensive view of the environment in order to identify targets of interest such as injured victims. It fills the gap for comprehensive and scalable displays that provide a network-centric perspective on UGVs. We compared the image queue to a traditional synchronous display with live video feeds and found that the image queue reduces errors and operator workload. Furthermore, it disentangles target detection from concurrent system operations and enables a call center approach to target detection. With such an approach, we can scale up to very large multi-robot systems gathering huge amounts of data that are then distributed to multiple operators. Copyright 2011 ACM.
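    The "call center" framing works because asynchronous imagery has no real-time coupling to any one robot: any free operator can review the next image. A minimal sketch of that producer/consumer structure follows; all names (OperatorPool, submit, next_image) are hypothetical, as the abstract does not specify an implementation.

    ```python
    # Minimal sketch of the call-center idea: images from many robots feed
    # one shared queue, and each free operator pulls the next unreviewed one.
    import queue

    class OperatorPool:
        """Distributes queued imagery to whichever operator asks next."""

        def __init__(self) -> None:
            self._images: queue.Queue[str] = queue.Queue()

        def submit(self, image_id: str) -> None:
            # Robots (producers) enqueue imagery as it is gathered.
            self._images.put(image_id)

        def next_image(self) -> str | None:
            # Operators (consumers) take work first-come, first-served.
            try:
                return self._images.get_nowait()
            except queue.Empty:
                return None
    ```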

    Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    In the modern information age, the quantity and complexity of spatiotemporal data are increasing rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data produce information clutter and overload, imposing a high cognitive load on the users of these systems. To support safety-critical, time-critical decision making in dynamic environments, and to make spatiotemporal data easy to manage, browse, and search, we propose an asynchronous, scalable, and comprehensive method for organizing, displaying, and interacting with spatiotemporal data that allows operators to navigate through the spatiotemporal information rather than through the environments being examined, while maintaining all necessary global and local situation awareness. To empirically demonstrate the viability of our approach, we developed the Event-Lens system, which generates asynchronous prioritized images to provide the operator with a manageable, comprehensive view of the information collected by multiple sensors. A user study and interaction-mode experiments were designed and conducted. The Event-Lens system showed a consistent advantage across multiple moving-target marking-task performance measures. Participants' attentional control, spatial ability, and action video gaming experience were also found to affect their overall performance.
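    "Asynchronous prioritized images" suggests a priority-ordered review queue: sensor events are scored and the operator always sees the highest-priority image next. The sketch below illustrates that pattern in the spirit of Event-Lens; the scoring scheme and names (EventQueue, importance) are assumptions, not the paper's actual design.

    ```python
    # Illustrative sketch of prioritized asynchronous event review.
    import heapq
    import itertools

    class EventQueue:
        def __init__(self) -> None:
            self._heap: list[tuple[float, int, str]] = []
            self._counter = itertools.count()  # tie-breaker for equal scores

        def push(self, image_id: str, importance: float) -> None:
            # heapq is a min-heap, so negate the score for highest-first.
            heapq.heappush(self._heap,
                           (-importance, next(self._counter), image_id))

        def pop(self) -> str | None:
            # Hand the operator the most important unreviewed image.
            return heapq.heappop(self._heap)[2] if self._heap else None
    ```

    Decaying an event's importance with its age would be a natural extension for moving targets, whose last known position grows stale.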

    Aerospace Medicine and Biology. A continuing bibliography with indexes

    This bibliography lists 244 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1981. Topics in aerospace medicine and aerobiology are covered, including listings for physiological factors, astronaut performance, control theory, artificial intelligence, and cybernetics.

    SEAPAK user's guide, version 2.0. Volume 1: System description

    SEAPAK is a user-interactive satellite data analysis package developed for processing and interpreting Nimbus-7 Coastal Zone Color Scanner (CZCS) and NOAA Advanced Very High Resolution Radiometer (AVHRR) data. Significant revisions were made to version 1.0 of the guide, and the ancillary environmental data analysis module was expanded. The package continues to emphasize user friendliness and interactive data analysis. Additionally, because the scientific goals of the ocean color research being conducted have shifted to large space and time scales, batch processing capabilities for both satellite and ancillary environmental data analyses were enhanced, allowing large quantities of data to be ingested and analyzed in the background.

    Operator/equipment Performance Measures: Results Of Literature Search

    This literature review focuses on topics concerning perception, acceptable transmission delay, fidelity, and visual systems, including resolution, field of view, and target-background contrast.

    Proceedings of Abstracts Engineering and Computer Science Research Conference 2019

    © 2019 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. Note: the keynote "Fluorescence visualisation to evaluate effectiveness of personal protective equipment for infection control" is © 2019 Crown copyright and is therefore licensed under the Open Government Licence v3.0. Under this licence users are permitted to copy, publish, distribute and transmit the Information; adapt the Information; and exploit the Information commercially and non-commercially, for example by combining it with other Information or by including it in their own product or application. Where you do any of the above you must acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

    This book is the record of abstracts submitted and accepted for presentation at the inaugural Engineering and Computer Science Research Conference held on 17th April 2019 at the University of Hertfordshire, Hatfield, UK. The conference is a local event that brings together research students, staff, and eminent external guests to celebrate Engineering and Computer Science research at the University of Hertfordshire. The ECS Research Conference aims to showcase the broad landscape of research taking place in the School of Engineering and Computer Science. The 2019 conference was articulated around three topical cross-disciplinary themes: Make and Preserve the Future; Connect the People and Cities; and Protect and Care.