209 research outputs found

    Three-dimensional graphical interfaces for teleoperation of mobile robotic platforms

    Growing needs in healthcare make technologies such as in-home telepresence increasingly attractive. In the field of human-machine interfaces, however, it is often noted that neglecting how information coming from the robot is presented can hinder the operator's understanding of the situation, leading to reduced efficiency. It is by taking into account how the operator processes information that we can develop an interface that lets the operator devote the maximum of their cognitive capacity to the task. Moreover, recent developments in high-performance, low-cost hardware allow us to apply modern real-time image-processing techniques. We therefore propose to develop a flexible system for studying different ways of presenting the information relevant to the efficient navigation of a mobile robotic platform. The system is based on a three-dimensional reconstruction of the traversed environment, built from the readings of sensors commonly found on such platforms. In addition, a stereoscopic video camera is used to reproduce the perspective effect as a person on site would perceive it. A video feed is often appreciated by operators, and we believe that adding depth to our reproduction of it is an advantage. Finally, the interface's virtual camera can be continuously reoriented to provide either an egocentric or an exocentric perspective, according to the operator's preferences. We validate the system by evaluating, with various metrics, the performance of operators ranging from novices to mobile-robotics experts, in order to identify the functional requirements of this kind of interface and how to evaluate them with target populations. We believe that the flexibility in positioning the interface's virtual camera remains the most important aspect of the system: we expect it to let each operator adapt the interface to their preferences and to the task at hand so as to work as efficiently as possible. Although our experiments do not include tasks specific to the telehealth domain, we believe that this work's observations about teleoperation in general will eventually apply to that particular domain.
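    The abstract describes a virtual camera that can be repositioned between egocentric and exocentric perspectives. As a rough illustration only (the thesis' actual camera model is not given), the sketch below computes a camera position and look-at target for the two modes; the offsets, pose convention, and function name are assumptions.

```python
import numpy as np

def virtual_camera_pose(robot_pos, robot_yaw, mode="egocentric", back=2.0, up=1.5):
    """Hypothetical sketch: place the interface's virtual camera either at the
    robot's own viewpoint (egocentric) or tethered behind and above the platform,
    looking at it (exocentric). All offsets are illustrative, not from the thesis.

    robot_pos: [x, y, z] in meters; robot_yaw: heading in radians.
    Returns (camera_position, look_at_target).
    """
    robot_pos = np.asarray(robot_pos, dtype=float)
    forward = np.array([np.cos(robot_yaw), np.sin(robot_yaw), 0.0])
    if mode == "egocentric":
        cam = robot_pos + np.array([0.0, 0.0, 0.5])   # roughly at sensor height
        target = cam + forward                        # look straight ahead
    else:
        cam = robot_pos - back * forward + np.array([0.0, 0.0, up])
        target = robot_pos                            # keep the robot in view
    return cam, target

if __name__ == "__main__":
    print(virtual_camera_pose([1.0, 2.0, 0.0], robot_yaw=0.3, mode="exocentric"))
```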

    Elicitation of trustworthiness requirements for highly dexterous teleoperation systems with signal latency

    Introduction: Teleoperated robotic manipulators allow us to bring human dexterity and cognition to hard-to-reach places on Earth and in space. In long-distance teleoperation, however, the finite speed of light results in an unavoidable and perceivable signal delay. The resulting disconnect between command, action, and feedback means that systems often behave unexpectedly, reducing operators' trust in them. If we are to widely adopt telemanipulation technology in high-latency applications, we must identify and specify what would make these systems trustworthy. Methods: In this requirements elicitation study, we present the results of 13 interviews with expert operators of remote machinery from four application areas (nuclear reactor maintenance, robot-assisted surgery, underwater exploration, and ordnance disposal), exploring which features, techniques, or experiences lead them to trust their systems. Results: We found that across all applications except surgery, the top-priority requirement for developing trust is that operators have a comprehensive engineering understanding of the system's capabilities and limitations. The remaining requirements can be summarized into three areas: improving situational awareness, facilitating operator training and familiarity, and easing the operator's cognitive load. Discussion: While the inclusion of technical features to assist operators was welcomed, these were given lower priority than non-technical, user-centric approaches. The signal delays in the participants' systems ranged from imperceptible to 1 min, and included examples of successful dexterous telemanipulation for maintenance tasks with a 2 s delay. As this is comparable to Earth-to-orbit and Earth-to-Moon delays, the requirements discussed could be transferable to telemanipulation tasks in space

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction, particularly when there is a need to visualize, explore, and perform tasks in environments that are inaccessible, hazardous, or distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality gap between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents research that addresses these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: acuity is sharpest at the center and falls off towards the periphery, where lower-resolution vision guides eye movements so that central vision visits the interesting, crucial parts of the scene. As a first contribution, the thesis develops remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and rendering in VR of 3D reconstructed scenes while simultaneously reducing throughput requirements and latency. As a second contribution, the thesis looks into attentional mechanisms to select and draw user engagement to specific information in the dynamic spatio-temporal environment. It proposes a strategy to analyze the remote scene in terms of its 3D structure, its layout, and the spatial, functional, and semantic relationships between objects, using models of human visual perception; the strategy allocates a greater proportion of computational resources to objects of interest and produces a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrate that the introduced methods are visually superior while significantly reducing latency and throughput
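    The first contribution relies on the acuity fall-off of peripheral vision to reduce how much point-cloud data must be processed and transmitted. The thesis' exact scheme is not spelled out in the abstract; the following is a minimal sketch, assuming a simple gaze-dependent keep probability, of how such foveated subsampling might look. The cosine-based fall-off, thresholds, and function name are illustrative assumptions.

```python
import numpy as np

def foveated_subsample(points, gaze_dir, rng=None):
    """Hypothetical sketch of acuity-driven point-cloud subsampling: points near
    the gaze direction are kept with high probability, while the keep probability
    falls off with angular eccentricity, mimicking the drop in visual acuity
    toward the periphery.

    points: (N, 3) array in the viewer frame; gaze_dir: 3-vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    points = np.asarray(points, dtype=float)
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    cos_ecc = dirs @ gaze                        # 1.0 at the gaze center
    keep_prob = np.clip(cos_ecc, 0.05, 1.0)      # full density at the fovea,
    keep = rng.random(len(points)) < keep_prob   # sparse at the periphery
    return points[keep]

if __name__ == "__main__":
    pts = np.random.default_rng(0).normal(size=(10000, 3)) + [0.0, 0.0, 2.0]
    print(len(foveated_subsample(pts, gaze_dir=[0.0, 0.0, 1.0])))
```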

    The Utility of Measures of Attention and Situation Awareness for Quantifying Telepresence

    Telepresence is defined as the sensation of being present at a remote robot task site while physically located at a local control station. The concept has received substantial attention recently because of hypothesized benefits of presence experiences on human task performance with teleoperation systems. Human factors research, however, has made little progress in establishing a relationship between telepresence and teleoperator performance. This has been attributed to the multidimensional nature of telepresence, the lack of appropriate studies to elucidate this relationship, and the lack of a valid, reliable, objective measure of telepresence. Subjective measures (e.g., questionnaires, rating scales) are most commonly used to measure telepresence. Objective measures have been proposed, including behavioral responses to stimuli presented in virtual worlds (e.g., ducking virtual objects). Other research has suggested physiological measures, such as cardiovascular responses, to indicate the extent of telepresence experiences in teleoperation tasks. The objective of the present study was to assess the utility of measures of attention allocation and situation awareness (SA) for objectively describing telepresence; attention and SA have been identified as cognitive constructs potentially underlying telepresence experiences. Participants performed a virtual mine neutralization task involving remote control of a simulated robotic rover and integrated tools to locate, uncover, and dispose of mines. Subjects simultaneously completed two secondary tasks that required them to monitor for low-battery signals associated with operation of the vehicle and controls. Subjects were divided into three groups of eight according to task difficulty, which was manipulated by varying the number and spacing of mines in the task environment. Performance was measured as the average time to neutralize four mines. Telepresence was assessed using a Presence questionnaire. Situation awareness was measured using the Situation Awareness Global Assessment Technique. Attention was measured as the ratio of the number of low-battery signal detections to the total number of signals presented through the secondary task displays. Analysis of variance revealed that level of difficulty significantly affected performance time and telepresence. Regression analysis revealed that level of difficulty, immersive tendencies, and attention explained significant portions of the variance in telepresence

    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted, exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed reality techniques. I proposed a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, providing minimal intrusiveness and accommodating the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance, where teleoperation is compromised by the keyhole effect resulting from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, calibration accuracy must be within the millimeter level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica using commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive, so we propose a structured-light scanning system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color+depth camera mounted on the side of the screen; by fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image
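    The abstract states only that the operator-to-robot latency is formulated as linear equations with a smoothing coefficient in [0, 1]; it does not give the full one-step-ahead formulation. The sketch below is a minimal illustration, assuming an exponential-smoothing update followed by a linear extrapolation of the latest commanded increment; the function name and model are assumptions, not the dissertation's algorithm.

```python
import numpy as np

def one_step_ahead_prediction(history, alpha=0.6):
    """Hypothetical sketch: predict the next robot pose from the commanded pose
    history using exponential smoothing with coefficient alpha in [0, 1], then
    extrapolate one step ahead with the most recent commanded increment.

    history: sequence of past commanded poses (each an array of joint or
    Cartesian coordinates), oldest first; alpha is an illustrative constant.
    """
    history = np.asarray(history, dtype=float)
    smoothed = history[0]
    for pose in history[1:]:
        # alpha controls how strongly the latest command dominates the estimate.
        smoothed = alpha * pose + (1.0 - alpha) * smoothed
    increment = history[-1] - history[-2] if len(history) > 1 else 0.0
    return smoothed + increment

if __name__ == "__main__":
    commands = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.05], [0.3, 0.1]]
    print(one_step_ahead_prediction(commands, alpha=0.6))
```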

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. The research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process so that the embedded system can intuitively control the robotic system and support immersive, natural human-robot interactive teleoperation. The thesis presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is also presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace, and the mobile manipulator platform can be effectively controlled by non-skilled operators physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, the thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator's hand movements along a conical guidance path, effectively aligning the welding torch for welding and constraining the welding operation within a collision-free area.
    Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts
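    The imitation-based, velocity-centric motion mapping is described only at a high level in the abstract. As a rough illustration, and not the thesis' actual implementation, the sketch below finite-differences a tracked hand position in the MR subspace frame into a scaled, clamped TCP velocity command; the scaling factor, speed limit, and function name are assumptions.

```python
import numpy as np

def hand_to_tcp_velocity(hand_prev, hand_curr, dt, scale=1.0, v_max=0.25):
    """Hypothetical sketch of velocity-centric motion mapping: differentiate the
    operator's tracked hand position (MR subspace frame), scale it, and clamp it
    to a safe maximum before using it as a linear velocity command for the TCP.

    hand_prev, hand_curr: 3-vectors in meters; dt: seconds between samples.
    scale and v_max are illustrative tuning parameters.
    """
    v = scale * (np.asarray(hand_curr, dtype=float) - np.asarray(hand_prev, dtype=float)) / dt
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)  # clamp to the maximum allowed TCP speed
    return v  # could then be packed into a twist command for the manipulator's velocity controller

if __name__ == "__main__":
    print(hand_to_tcp_velocity([0.0, 0.0, 0.0], [0.01, 0.0, 0.005], dt=0.02))
```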

    Virtual reality aided vehicle teleoperation

    This thesis describes a novel approach to vehicle teleoperation, the human-mediated control of a vehicle from a remote location. Typical methods for providing updates of the world around the vehicle use vehicle-mounted video cameras. This methodology suffers from two problems: lag and limited field of view. Lag is the time it takes for a signal to travel from the operator's location to the vehicle; it delays both the images from the camera and the commands from the operator. This behavior is a serious problem when the vehicle is approaching an obstacle: if the delay is long enough, the vehicle might crash into the obstacle before the operator knows that it is there. To complicate matters, most cameras provide only a small arc of visibility around the vehicle, leaving a significant blind spot. Hazards close to the vehicle might therefore not be visible to the operator, such as a rock behind and to the left of the vehicle; if the vehicle were maneuvered sharply to the left, it might strike the rock. Virtual reality is used to attack these two problems. A simulation of the vehicle predicts its positional response to inputs, and this response is displayed in a virtual world that mimics the operational environment. A dynamics algorithm called the wagon tongue method is used by a computer at the remote site to correct for inaccuracies between the simulated vehicle position and the actual vehicle position; it eliminates the effect of the average lag value. Synchronization code ensures that the vehicle executes commands with the same amount of time between them as when the operator issued them, eliminating the effects of lag variation. The problem of limited field of view is solved by using a virtual camera viewpoint behind the vehicle that displays the entire world around the vehicle. This thesis develops a system using virtual reality aided teleoperation and compares it with direct control and with vehicle-mounted camera aided teleoperation
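    The abstract does not spell out the wagon tongue correction or the command synchronization; the sketch below is a minimal illustration of one plausible reading, in which the remote computer repeatedly pulls the simulated pose a small fraction of the way toward the last reported actual pose so that the average offset introduced by lag dies out. The gain, pose convention, and function name are assumptions, not the thesis' stated algorithm.

```python
import numpy as np

def wagon_tongue_correction(sim_pose, actual_pose, gain=0.1):
    """Hypothetical sketch of drift correction between the predictive simulation
    and the (delayed) reported vehicle pose: nudge the simulated pose toward the
    actual pose by a fixed fraction each update, so the mean lag-induced offset
    is gradually eliminated without discarding the simulation's responsiveness.

    sim_pose, actual_pose: [x, y, heading]; gain in (0, 1] is illustrative.
    """
    sim_pose = np.asarray(sim_pose, dtype=float)
    actual_pose = np.asarray(actual_pose, dtype=float)
    return sim_pose + gain * (actual_pose - sim_pose)

if __name__ == "__main__":
    # The simulation drifted slightly ahead of the delayed measurement; pull it back.
    print(wagon_tongue_correction([5.2, 1.0, 0.05], [5.0, 0.9, 0.0]))
```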

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures