
    Enhancing tele-operation - Investigating the effect of sensory feedback on performance

    The decline in the number of healthcare service providers relative to the growing number of service users prompts the development of technologies to improve the efficiency of healthcare services. One such technology is the assistive robot, remotely tele-operated to provide care and support for older adults with assistive care needs and for people living with disabilities. Tele-operation makes it possible to provide human-in-the-loop robotic assistance while also addressing safety concerns about the use of autonomous robots around humans. Unlike in many other applications of robot tele-operation, safety is particularly significant here because tele-operated assistive robots are used in close proximity to vulnerable human users. It is therefore important to give tele-operators as much information as possible about the robot and its workspace to ensure safety as well as efficiency. Since robot tele-operation is relatively unexplored in the context of assisted living, this thesis explores different feedback modalities that may be employed to communicate sensor information to tele-operators. The thesis presents the research as it transitioned from identifying and evaluating additional feedback modalities that may supplement video feedback to exploring different strategies for communicating those modalities. Because some of the required sensors and feedback devices were not readily available, several design iterations were needed to develop the hardware and software for the studies. The first human study investigated the effect of feedback on tele-operator performance, measured in terms of task completion time, ease of use of the system, number of robot joint movements, and success or failure of the task. The effect of verbal feedback between the tele-operator and service users was also investigated.
Feedback modalities have differing effects on performance metrics, so the choice of optimal feedback may vary from task to task. Results show that participants preferred scenarios with verbal feedback over scenarios without it, a preference that was also reflected in their performance. Gaze metrics from the study showed that it may be possible to understand how tele-operators interact with the system from their areas of interest as they carry out tasks. These findings suggest that such studies can be used to improve the design of tele-operation systems. The need for social interaction between the tele-operator and the service user means that the visual and auditory modalities are already engaged while tasks are carried out, further reducing the number of sensory modalities available for communicating information to tele-operators. A wrist-worn, Wi-Fi-enabled haptic feedback device was therefore developed, and a study was carried out to investigate haptic sensitivity across the wrist. Results suggest that sensitivity to haptic stimulation varies across wrist locations, both with and without video distraction, and depends on the duration and amplitude of stimulation. This suggests that dynamic control of haptic feedback can be used to improve haptic perception across the wrist, and that it may be possible to display more than one type of sensor data to tele-operators during a task. The final study investigated whether participants can differentiate between different types of sensor data conveyed through different locations on the wrist via haptic feedback. The effect of repeated attempts on performance was also investigated. Total task completion time decreased with task repetition, and participants with prior gaming and robot experience showed a larger reduction in total task completion time than participants without such experience.
The reduction in task completion time was observed for all stages of the task and varied from stage to stage, but participants with supplementary feedback had higher task completion times than participants without it. Although gripper trajectories shortened with task repetition, participants with supplementary feedback had longer gripper trajectories than participants without it, while participants with prior gaming experience had shorter gripper trajectories than those without. Perceived workload also decreased with task repetition, but participants with feedback reported higher perceived workload than participants without feedback. However, participants without feedback reported higher frustration than those with feedback. Results show that the effect of feedback may not be significant where participants can obtain the necessary information from video feedback; however, participants became fully dependent on supplementary feedback when video feedback could not provide the information needed. The findings presented in this thesis have potential applications in healthcare and in other applications of robot tele-operation and feedback. They can be used to improve feedback designs for tele-operation systems to ensure safe and efficient tele-operation, and the thesis also shows ways in which visual feedback can be combined with other feedback modalities. The haptic feedback device designed in this research may also be used to provide situational awareness for the visually impaired.
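The dynamic control of haptic feedback described above can be sketched in code. The following is a minimal, hypothetical illustration: the wrist-location names, sensitivity gains, and sensor channels are assumptions for the sake of the example, not values or mappings taken from the thesis.

```python
# Hypothetical sketch: scale the haptic drive amplitude per wrist location so
# that perceived intensity stays roughly uniform despite varying local
# sensitivity. Location names and gain values are illustrative assumptions.

SENSITIVITY_GAIN = {
    "dorsal": 1.0,   # assumed most sensitive: no boost needed
    "ventral": 1.2,
    "radial": 1.4,
    "ulnar": 1.5,    # assumed least sensitive: largest boost
}

MAX_AMPLITUDE = 1.0  # normalised actuator drive limit


def drive_amplitude(location: str, base_amplitude: float) -> float:
    """Return the sensitivity-compensated drive amplitude for a wrist location."""
    return min(base_amplitude * SENSITIVITY_GAIN[location], MAX_AMPLITUDE)


def render(sensor_values: dict) -> dict:
    """Map each (hypothetical) sensor channel to its own wrist location,
    so two data types can be displayed simultaneously, as in the final study."""
    channel_to_location = {"grip_force": "dorsal", "proximity": "ventral"}
    return {
        loc: drive_amplitude(loc, sensor_values[chan])
        for chan, loc in channel_to_location.items()
    }
```

Clamping at `MAX_AMPLITUDE` reflects the physical drive limit of the actuator; a real device would also need per-user calibration of the gain table.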

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework for a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the wealth of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. First, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.

    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality combines many modes of communication, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making.
    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.
    Findings – Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.
    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all of the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.
    Practical implications – A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human-machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals.
    Social implications – A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.
    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. At present, however, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The report concludes by briefly introducing the research proposal for a multimodal interactive robot in military communication.

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we can communicate information in a physically based language that has never been explored before. Given the constant improvement in haptic technology and the increasing levels of research into and development of haptics-related algorithms, protocols and devices, there is reason to believe that haptic technology has a promising future.

    A Survey of Multi-Agent Human-Robot Interaction Systems

    This article presents a survey of literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of "multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another: the Team structure, the Interaction style among agents, and the system's Computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely Team size, Team composition, Interaction model, Communication modalities, and Robot control, which are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature, briefly discuss their applications, and analyze how these attributes differ from the dyadic human-robot case. We summarize key observations from the current literature and identify challenges and promising areas for future research in this domain. To realize the vision of robots being part of society and interacting seamlessly with humans, research on multi-human, multi-robot systems needs to expand. Not only do these systems require coordination among several agents, they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior understanding and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams requires more attention; this will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans, a team composition reflecting many real-world scenarios.
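The five attributes above amount to a record type for characterizing an HRI system. The following is a minimal sketch of that idea; the enum members, field names, and the example entry are illustrative assumptions, not the survey's own schema.

```python
# Hypothetical sketch: the survey's five attributes as a record type, so that
# systems from the literature could be characterized and compared
# programmatically. Enum members and the example entry are assumptions.
from dataclasses import dataclass
from enum import Enum


class InteractionModel(Enum):
    DYADIC = "dyadic"
    MULTI_AGENT = "multi-agent"


class RobotControl(Enum):
    TELEOPERATED = "teleoperated"
    SEMI_AUTONOMOUS = "semi-autonomous"
    AUTONOMOUS = "autonomous"


@dataclass(frozen=True)
class HRISystem:
    humans: int                       # Team composition: human count
    robots: int                       # Team composition: robot count
    interaction_model: InteractionModel
    communication_modalities: tuple   # e.g. ("speech", "gesture", "haptics")
    robot_control: RobotControl

    @property
    def team_size(self) -> int:
        """Team size is derived from the composition."""
        return self.humans + self.robots

    @property
    def is_dyadic(self) -> bool:
        """Dyadic systems have exactly one human and one robot."""
        return self.humans == 1 and self.robots == 1


# Illustrative entry: one human supervising three semi-autonomous robots.
example = HRISystem(
    humans=1, robots=3,
    interaction_model=InteractionModel.MULTI_AGENT,
    communication_modalities=("speech", "gesture"),
    robot_control=RobotControl.SEMI_AUTONOMOUS,
)
```

Deriving `team_size` from the composition keeps the two attributes consistent by construction, which matters when populating categories with many systems from the literature.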

    A review on manipulation skill acquisition through teleoperation-based learning from demonstration

    Manipulation skill learning and generalization have gained increasing attention due to the wide application of robot manipulators and the rapid growth of robot learning techniques. In particular, learning from demonstration has been exploited widely and successfully in the robotics community, and it is regarded as a promising way to realize manipulation skill learning and generalization. In addition to the learning techniques, immersive teleoperation enables a human to operate a remote robot through an intuitive interface and achieve telepresence. It is therefore a promising approach to transfer manipulation skills from humans to robots by combining learning methods with teleoperation, and to adapt the learned skills to different tasks in new situations. This review aims to provide an overview of immersive teleoperation for skill learning and generalization in complex manipulation tasks. To this end, the key technologies, e.g. manipulation skill learning, multimodal interfaces for teleoperation, and telerobotic control, are introduced. An overview is then given of the most important applications of immersive teleoperation platforms for robot skill learning. Finally, the survey discusses the remaining open challenges and promising research topics.

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. Robots, which initially performed only simple jobs, are now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections: the first focuses on emotional intelligence, while the second discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in robotics to accommodate the needs of society and industry.