341 research outputs found

    Enabling Remote Responder Bio-Signal Monitoring in a Cooperative Human–Robot Architecture for Search and Rescue

    Get PDF
    The roles of emergency responders are challenging and often physically demanding, so it is essential that their duties are performed safely and effectively. In this article, we address real-time bio-signal sensor monitoring for responders in disaster scenarios. In particular, we propose the integration of a set of health monitoring sensors suitable for detecting stress, anxiety and physical fatigue into an Internet of Cooperative Agents architecture for search and rescue (SAR) missions (SAR-IoCA), which allows remote control and communication between human and robotic agents and the mission control center. To this end, we performed proof-of-concept experiments with a bio-signal sensor suite worn by firefighters in two high-fidelity SAR exercises. Moreover, we conducted a survey, distributed to end users through the Fire Brigade consortium of the Provincial Council of Málaga, in order to analyze the firefighters' opinions about biological signal monitoring while on duty. As a result of this methodology, we propose a wearable sensor suite design aimed at providing easy-to-wear integrated-sensor garments suitable for emergency worker activity. The article offers a discussion of user acceptance, performance results and lessons learned. This work has been partially funded by the Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, projects RTI2018-093421-B-I00 and PID2021-122944OB-I00. Partial funding for open access charge: Universidad de Málaga.
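    The abstract does not specify SAR-IoCA's transport or message format, but the pattern it describes, wearable sensors streaming telemetry from responders to a mission control center, is commonly built on a publish/subscribe protocol such as MQTT. Below is a minimal illustrative sketch of that pattern in Python; the broker address, topic layout, field names, and the alert threshold are all assumptions, not the SAR-IoCA design.

```python
# Hypothetical sketch: a wearable node publishing responder bio-signals to a
# mission control broker over MQTT. All names and thresholds are illustrative.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "mission-control.local"   # assumed broker address
TOPIC = "sar/responder/42/biosignals"   # assumed topic layout

def read_sensors() -> dict:
    """Stand-in for the real sensor suite (heart rate, GSR, skin temperature)."""
    return {"hr_bpm": 96, "gsr_uS": 4.2, "skin_temp_c": 36.4, "ts": time.time()}

# paho-mqtt 1.x constructor; on 2.x use mqtt.Client(mqtt.CallbackAPIVersion.VERSION1).
client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()

try:
    while True:
        sample = read_sensors()
        # Crude illustrative alert rule; real stress/fatigue detection would
        # fuse several signals over a time window.
        sample["alert"] = sample["hr_bpm"] > 150
        # QoS 1 so alerts survive brief connectivity drops in the field.
        client.publish(TOPIC, json.dumps(sample), qos=1)
        time.sleep(1.0)  # 1 Hz telemetry
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```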

    Gesture2Path: Imitation Learning for Gesture-aware Navigation

    Full text link
    As robots increasingly enter human-centered environments, they must not only be able to navigate safely around humans, but also adhere to complex social norms. Humans often rely on non-verbal communication through gestures and facial expressions when navigating around other people, especially in densely occupied spaces. Consequently, robots also need to be able to interpret gestures as part of solving social navigation tasks. To this end, we present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control. Gestures are interpreted by a neural network that operates on streams of images, while a state-of-the-art model-predictive control algorithm solves point-to-point navigation tasks. We deploy our method on real robots and showcase the effectiveness of our approach for the four gesture-navigation scenarios: left/right, follow me, and make a circle. Our experiments indicate that our method is able to successfully interpret complex human gestures and to use them as a signal to generate socially compliant trajectories for navigation tasks. We validated our method based on in-situ ratings of participants interacting with the robots. Comment: 8 pages, 12 figures
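    The abstract describes the division of labor, a neural network turns image streams into a gesture signal and an MPC planner turns that signal into socially compliant trajectories, without giving the controller's form. The sketch below shows the general pattern with a simple sampling-based planner over unicycle rollouts; the gesture labels, cost weights, and dynamics are illustrative assumptions, not the Gesture2Path implementation.

```python
# Minimal sketch: a gesture signal modulates the cost of a sampling-based
# point-to-point planner (one MPC step). All details here are assumptions.
import numpy as np

GESTURES = ("none", "go_left", "go_right", "follow_me", "make_circle")

def rollout(v: float, w: float, horizon: int = 20, dt: float = 0.1) -> np.ndarray:
    """Unicycle rollout of a constant (v, w) command; returns (horizon, 3) poses."""
    poses = np.zeros((horizon, 3))
    x = y = th = 0.0
    for t in range(horizon):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        poses[t] = (x, y, th)
    return poses

def cost(poses: np.ndarray, goal: np.ndarray, gesture: str) -> float:
    goal_cost = np.linalg.norm(poses[-1, :2] - goal)
    # Gesture-dependent social term: bias lateral position relative to the
    # straight-line path (y > 0 is the robot's left here, by assumption).
    lateral = poses[:, 1].mean()
    social = {"go_left": -lateral, "go_right": lateral}.get(gesture, 0.0)
    return goal_cost + 2.0 * social

def plan(goal: np.ndarray, gesture: str) -> tuple:
    """Pick the lowest-cost (v, w) over a sampled command grid."""
    candidates = [(v, w) for v in np.linspace(0.2, 1.0, 5)
                         for w in np.linspace(-1.0, 1.0, 11)]
    return min(candidates, key=lambda c: cost(rollout(*c), goal, gesture))

print(plan(np.array([2.0, 0.0]), "go_left"))   # veers left of the direct path
print(plan(np.array([2.0, 0.0]), "go_right"))  # veers right
```

    In a full system the gesture string would come from the image-stream classifier, and the planner would re-run at every control step with updated obstacle and pedestrian terms in the cost.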

    Augmented reality device for first response scenarios

    Get PDF
    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualization of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video-see-through Head Mounted Display (HMD) and a finger-bending sensor glove. Note: Augmented reality (AR) is a field of computer research that deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia) This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation). The CD requires Adobe Acrobat, Microsoft Office, and Windows Media Player or RealPlayer.
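    For the marker-registration step the dissertation only says it uses optical fiducial markers and computational vision; a modern stand-in for that pipeline is OpenCV's ArUco module (API shown is OpenCV ≥ 4.7). The camera intrinsics, marker size, and dictionary below are assumptions for illustration, not the prototype's actual configuration.

```python
# Hypothetical sketch: detect an ArUco fiducial in one camera frame and recover
# the camera pose relative to it with solvePnP.
import cv2
import numpy as np

MARKER_SIZE_M = 0.15  # assumed printed marker edge length in meters

# Assumed intrinsics; a real system would load a calibration for its camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

# 3D corners of the marker in its own frame (z = 0 plane), clockwise from top-left.
half = MARKER_SIZE_M / 2
obj_pts = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # one frame of the head-mounted camera feed
if frame is None:
    raise SystemExit("no input frame found")

corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

if ids is not None:
    for marker_id, c in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(
            obj_pts, c.reshape(4, 2).astype(np.float32), K, dist)
        if ok:
            # tvec is the marker position in the camera frame; inverting the
            # transform gives the user's pose relative to the labeled location.
            print(f"marker {marker_id}: camera-frame position {tvec.ravel()}")
```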

    Applied Cognitive Sciences

    Get PDF
    Cognitive science is an interdisciplinary field in the study of the mind and intelligence. The term cognition refers to a variety of mental processes, including perception, problem solving, learning, decision making, language use, and emotional experience. The basis of the cognitive sciences is the contribution of philosophy and computing to the study of cognition. Computing is very important in the study of cognition because computer-aided research helps to model mental processes, and computers are used to test scientific hypotheses about mental organization and functioning. This book provides a platform for reviewing these disciplines and presenting cognitive research as a separate discipline.

    Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems

    Get PDF
    This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS) and was informed by human-robot interaction (HRI) and technology findings, resulting in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS. Findings from the HRI literature suggested a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this, as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist). The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders instead of viewing the same interface as the Pilot in mUAS. Formative observations made during this research suggested: i) establishing common ground in mUAS is both verbal and visual, ii) the type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot, and iii) a separate Pilot role is necessary regardless of preferred coordination type in mUAS. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.

    Performance and Usability Evaluation Scheme for Mobile Manipulator Teleoperation

    Get PDF
    This article presents a standardized human–robot teleoperation interface (HRTI) evaluation scheme for mobile manipulators. Teleoperation remains the predominant control type for mobile manipulators in open environments, particularly for quadruped manipulators. However, mobile manipulators, especially quadruped manipulators, are relatively novel systems in industry compared to traditional machinery. Consequently, no standardized interface evaluation method has been established for them. The proposed scheme is the first of its kind for evaluating mobile manipulator teleoperation. It comprises a set of robot motion tests, objective measures, subjective measures, and a prediction model to provide a comprehensive evaluation. The motion tests encompass locomotion, manipulation, and a combined test. The duration of each trial is collected as the response variable in the objective measure. Statistical tools, including the mean, standard deviation, and t-test, are used to cross-compare different predictor variables. Based on an extended Fitts' law, the prediction model uses the measured times and a mission difficulty index to forecast system performance in future missions. The subjective measures use the NASA Task Load Index (NASA-TLX) and the System Usability Scale (SUS) to assess workload and usability. Finally, the proposed scheme is implemented on a real-world quadruped manipulator with two widely used HRTIs: a gamepad and a wearable motion capture system.
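    Classic Fitts' law predicts movement time as MT = a + b · ID, with ID = log2(D/W + 1) in the Shannon formulation; the article's extended model defines its own mission difficulty index, which the abstract does not spell out. The sketch below shows the general fit-then-forecast step on hypothetical trial data, using the Shannon index as a stand-in.

```python
# Illustrative sketch: fit movement time against a difficulty index from
# recorded trials, then forecast a new mission. Data are hypothetical.
import numpy as np

# (distance_m, tolerance_m, trial_time_s) — hypothetical teleoperation trials.
trials = [(2.0, 0.5, 41.0), (4.0, 0.5, 55.0), (4.0, 0.25, 68.0), (8.0, 0.25, 90.0)]

ids = np.array([np.log2(d / w + 1) for d, w, _ in trials])
times = np.array([t for _, _, t in trials])

# Least-squares fit of MT = a + b * ID.
b, a = np.polyfit(ids, times, 1)
print(f"MT ~ {a:.1f} + {b:.1f} * ID  (s)")

# Forecast a future mission with an assumed difficulty index.
new_id = np.log2(6.0 / 0.3 + 1)
print(f"predicted time for ID={new_id:.2f}: {a + b * new_id:.1f} s")
```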

    NASA Tech Briefs, December 2006

    Get PDF
    Topics include: Inferring Gear Damage from Oil-Debris and Vibration Data; Forecasting of Storm-Surge Floods Using ADCIRC and Optimized DEMs; User Interactive Software for Analysis of Human Physiological Data; Representation of Serendipitous Scientific Data; Automatic Locking of Laser Frequency to an Absorption Peak; Self-Passivating Lithium/Solid Electrolyte/Iodine Cells; Four-Quadrant Analog Multipliers Using G4-FETs; Noise Source for Calibrating a Microwave Polarimeter; Hybrid Deployable Foam Antennas and Reflectors; Coating MCPs with AlN and GaN; Domed, 40-cm-Diameter Ion Optics for an Ion Thruster; Gesture-Controlled Interfaces for Self-Service Machines; Dynamically Alterable Arrays of Polymorphic Data Types; Identifying Trends in Deep Space Network Monitor Data; Predicting Lifetime of a Thermomechanically Loaded Component; Partial Automation of Requirements Tracing; Automated Synthesis of Architecture of Avionic Systems; SSRL Emergency Response Shore Tool; Wholly Aromatic Ether-Imides as n-Type Semiconductors; Carbon-Nanotube-Carpet Heat-Transfer Pads; Pulse-Flow Microencapsulation System; Automated Low-Gravitation Facility Would Make Optical Fibers; Alignment Cube with One Diffractive Face; Graphite Composite Booms with Integral Hinges; Tool for Sampling Permafrost on a Remote Planet; and Special Semaphore Scheme for UHF Spacecraft Communications.

    Social Media Data in an Augmented Reality System for Situation Awareness Support in Emergency Control Rooms

    Get PDF
    During crisis situations, emergency operators require fast information access to achieve situation awareness and make the best possible decisions. Augmented reality could be used to visualize the wealth of user-generated content available on social media and enable context-adaptive functions for emergency operators. Although emergency operators agree that social media analytics will be important for their future work, it remains a challenge to filter and visualize large amounts of social media data. We conducted a goal-directed task analysis to identify the situation awareness requirements of emergency operators. By collecting tweets during two storms in Germany, we evaluated the usefulness of Twitter data for achieving situation awareness, and we conducted interviews with emergency operators to derive filter strategies for social media data. We synthesized the results by discussing how the unique interface of augmented reality can be used to integrate social media data into emergency control rooms for situation awareness support.
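    The abstract does not publish the derived filter strategies, but the pipeline it implies, narrowing a raw tweet stream by content, location, and recency before it reaches the operator's view, can be sketched as below. The keywords, bounding box, and age limit are illustrative assumptions, not the study's findings.

```python
# Illustrative sketch of a keyword + geofence + recency filter for a tweet
# stream. Field names and thresholds are assumptions.
from datetime import datetime, timedelta, timezone

KEYWORDS = {"storm", "flood", "tree down", "power outage"}  # assumed terms
BBOX = (51.0, 6.5, 52.0, 7.5)  # assumed (lat_min, lon_min, lat_max, lon_max)
MAX_AGE = timedelta(hours=2)

def relevant(tweet: dict, now: datetime) -> bool:
    text = tweet["text"].lower()
    if not any(k in text for k in KEYWORDS):
        return False
    lat, lon = tweet.get("coords", (None, None))
    if lat is None or not (BBOX[0] <= lat <= BBOX[2] and BBOX[1] <= lon <= BBOX[3]):
        return False  # drop un-geotagged or out-of-area posts
    return now - tweet["created_at"] <= MAX_AGE

now = datetime.now(timezone.utc)
stream = [
    {"text": "Tree down on Hauptstrasse, road blocked", "coords": (51.5, 7.0),
     "created_at": now - timedelta(minutes=20)},
    {"text": "Lovely weather today!", "coords": (51.5, 7.0), "created_at": now},
]
print([t["text"] for t in stream if relevant(t, now)])
```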

    Robotic Assisted Fracture Surgery

    Get PDF