23 research outputs found

    Multistream realtime control of a distributed telerobotic system


    Enhancing depth cues with AR visualization for forklift operation assistance in warehouses

    With warehouse operations accounting for a major part of logistics, architects tend to utilize every inch of the allocated space to maximize stacking capacity. Increasing rack height and narrowing the space between aisles are the main design levers for doing so. Even though forklift manufacturers have introduced high-reach trucks and forklifts for narrow aisles, operators still face many difficulties when handling heavy pallets. This thesis focused on developing a system that uses Augmented Reality (AR) to aid forklift operators in performing pallet racking and pick-up tasks. It used AR technology to superimpose virtual cues over the real world, marking the pallets to be picked up and moved, and to assist in operating the forklift using depth cues, with the aim of increasing the productivity of forklift operators in the warehouse. Depth cues were overlaid on a live video feed from a camera attached to the front of the forklift and displayed to participants on a laptop. To evaluate the usability of the system, an experiment was conducted: a remote-controlled toy forklift was used, and a motion-tracking system was set up to track the cab and the pallet. Simple pallet-handling tasks were designed for the participants, and their performance and feedback were collected and analysed. This thesis shows how AR offers a simple and efficient solution to the problems forklift operators face while performing pallet-handling tasks in a warehouse.
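The abstract does not say how the overlaid depth cues are computed. One plausible scheme, sketched below under a flat-floor, pinhole-camera assumption, derives a distance readout from the image row at which a detected pallet touches the ground. All function names and parameter values here are illustrative assumptions, not details from the thesis.

```python
import math

def ground_distance(row, cam_height=1.5, tilt=0.3, fy=800.0, cy=240.0):
    """Estimate the ground-plane distance (m) to a point seen at image row `row`.

    Pinhole model: the ray through pixel row `row` points
    tilt + atan((row - cy) / fy) radians below the horizontal; intersecting
    that ray with a flat floor gives cam_height / tan(angle).
    cam_height, tilt, fy, cy are assumed calibration values.
    """
    angle = tilt + math.atan((row - cy) / fy)
    if angle <= 0:
        return float("inf")  # ray at or above the horizon: never hits the floor
    return cam_height / math.tan(angle)

def depth_cue_label(row):
    """Text cue that could be drawn next to a detected pallet in the video feed."""
    return f"{ground_distance(row):.1f} m"
```

Rows lower in the image map to points closer to the forklift, so the readout shrinks as the operator approaches the pallet.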

    An Original Approach for a Better Remote Control of an Assistive Robot

    Much research has been done in the field of assistive robotics in the last few years. The first application field was assistance for people with disabilities. Different works have been performed on robotic arms in three kinds of situations. In the first case, the static arm, the arm is principally dedicated to office tasks such as using the telephone or fax. Several autonomous modes exist, which require knowing the precise position of objects. In the second configuration, the arm is mounted on a wheelchair. It follows the person, who can employ it in more use cases; but if the person must stay in bed, the arm is no longer useful. In the third configuration, the arm is mounted on a separate platform. This configuration allows the largest number of use cases but also poses more difficulties for piloting the robot. The second application field of assistive robotics deals with assistance at home for people losing their autonomy, for example a person with cognitive impairment. In this case, the assistance addresses two main points: security and cognitive stimulation. To ensure the safety of the person at home, different kinds of sensors can be used to detect alarming situations (falls, low cardiac pulse rate, and so on). To assist a distant operator in alarm detection, the idea is to give them complementary information from a mobile robot about the person's activity at home and to put them in contact with the person. Cognitive stimulation is one of the therapeutic means used to maintain as much of the person's cognitive capacities as possible for as long as possible. In this case, the robot can be used to bring cognitive stimulation exercises to the person and encourage the person to perform them. For these tasks, it is very difficult to build a totally autonomous robot; in the case of assisting people with disabilities, full autonomy is not even what the users want, as they wish to act by themselves.
The idea is to develop a semi-autonomous robot that a remote operator can manually pilot with some driving assistance. This is a realistic and, to some extent, desired solution. To achieve it, several scientific problems have to be studied. The first one is human-machine cooperation: how can a remote human operator control a robot to perform a desired task? One of the key points is to let the user clearly understand the way the robot works. Our original approach is to analyse this understanding through the concept of appropriation introduced by Piaget in 1936. As the robot must have capacities of perception

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
    An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation and ready adjustment of the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator's hand movements along conical guidance to effectively align the welding torch for welding, and constrains the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
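The velocity-centric motion mapping described above can be illustrated with a minimal sketch: an operator hand displacement is turned into a clamped TCP linear-velocity command, with a deadband to suppress hand tremor. The gain, deadband, and velocity limit below are illustrative assumptions, not the thesis's actual parameters.

```python
def hand_to_tcp_velocity(hand_pos, prev_hand_pos, dt,
                         gain=0.8, deadband=0.005, v_max=0.25):
    """Map an operator hand displacement (m) over dt (s) to a per-axis TCP
    linear-velocity command (m/s): scale the hand velocity by `gain`,
    ignore displacements below `deadband` m (tremor), and clamp each axis
    to +/- `v_max` m/s. All constants are illustrative.
    """
    cmd = []
    for p, q in zip(hand_pos, prev_hand_pos):
        d = p - q
        if abs(d) < deadband:
            cmd.append(0.0)          # inside the deadband: hold still
            continue
        v = gain * d / dt            # imitative, velocity-centric scaling
        cmd.append(max(-v_max, min(v_max, v)))  # safety clamp
    return cmd
```

In a ROS-based system such as the one described, a command like this would typically be republished each control cycle as a velocity setpoint for the manipulator.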

    Perception-driven approaches to real-time remote immersive visualization

    In remote immersive visualization systems, real-time 3D perception through RGB-D cameras, combined with modern Virtual Reality (VR) interfaces, enhances the user's sense of presence in a remote scene through 3D reconstruction. This is particularly valuable when there is a need to visualize, explore, and perform tasks in inaccessible environments that are too hazardous or too distant. However, a remote visualization system requires that the entire pipeline, from 3D data acquisition to VR rendering, satisfy demands on speed, throughput, and visual realism. Especially when using point clouds, there is a fundamental quality difference between the acquired data of the physical world and the displayed data, because network latency and throughput limitations negatively impact the sense of presence and provoke cybersickness. This thesis presents state-of-the-art research addressing these problems by taking the human visual system as inspiration, from sensor data acquisition to VR rendering. The human visual system does not have uniform vision across the field of view: visual acuity is sharpest at the center of the field of view and falls off towards the periphery. Peripheral vision provides lower resolution and guides eye movements so that central vision visits all the crucial parts of a scene. As a first contribution, the thesis developed remote visualization strategies that exploit this acuity fall-off to facilitate the processing, transmission, buffering, and VR rendering of 3D reconstructed scenes while reducing throughput requirements and latency. As a second contribution, the thesis investigated attentional mechanisms to select and draw user engagement to specific information in the dynamic spatio-temporal environment.
    It proposed a strategy to analyze the remote scene in terms of its 3D structure and layout and the spatial, functional, and semantic relationships between objects in the scene. The strategy primarily analyzes the scene with models of human visual perception, allocating a greater share of computational resources to objects of interest and creating a more realistic visualization. As a supplementary contribution, a new volumetric point-cloud density-based Peak Signal-to-Noise Ratio (PSNR) metric is proposed to evaluate the introduced techniques. An in-depth evaluation of the presented systems, a comparative examination of the proposed point-cloud metric, user studies, and experiments demonstrated that the methods introduced in this thesis are visually superior while significantly reducing latency and throughput.
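The acuity fall-off idea above can be sketched as eccentricity-dependent subsampling of a point cloud: points near the gaze direction are kept at full density, while peripheral points are kept with a probability that decays with angular eccentricity. The fall-off constants and function names below are illustrative assumptions, not the thesis's actual model.

```python
import math
import random

def acuity_weight(ecc_deg, fovea_deg=5.0, slope=0.3):
    """Relative sampling density for a point at `ecc_deg` degrees from the
    gaze direction: 1.0 inside an assumed 5-degree fovea, then a hyperbolic
    fall-off loosely modeled on peripheral acuity. Constants are illustrative."""
    if ecc_deg <= fovea_deg:
        return 1.0
    return 1.0 / (1.0 + slope * (ecc_deg - fovea_deg))

def foveated_subsample(points, gaze_dir, rng=random.Random(0)):
    """Keep each 3D point with probability equal to its acuity weight,
    cutting throughput for peripheral regions while preserving the fovea."""
    kept = []
    for p in points:
        # eccentricity = angle between the gaze direction and the point's direction
        dot = sum(a * b for a, b in zip(p, gaze_dir))
        norm = (math.sqrt(sum(a * a for a in p))
                * math.sqrt(sum(b * b for b in gaze_dir)))
        ecc = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if rng.random() < acuity_weight(ecc):
            kept.append(p)
    return kept
```

With such a scheme, the transmitted point count (and thus bandwidth) drops roughly in proportion to how much of the scene lies in the periphery.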

    Online Markerless Augmented Reality for Remote Handling System in Bad Viewing Conditions

    This thesis studies the development of Augmented Reality (AR) for the ITER mock-up remote handling environment. An important goal of employing an AR system is three-dimensional mapping of the scene, which provides position and orientation information about the environment to the operator. Remote Handling (RH) in harsh environments usually has to cope with a lack of sufficient visual feedback for the human operator, due to the limited number of on-site cameras, poor viewing angles, and so on. AR enables the user to perceive virtual, computer-generated objects in a real scene; the most common goals include visibility enhancement and the provision of extra information, such as positional data of various objects. The proposed AR system first recognizes and locates the object using a template-based matching algorithm, and then augments the virtual model on top of the found object. A tracking algorithm is exploited to locate the object in a sequence of frames. Conceptually, the template is found in each frame by computing the similarity between the template and the image for all relevant poses (rotations and translations) of the template. The objective of this thesis is to investigate whether ITER remote handling at DTP2 (Divertor Test Platform 2) can benefit from AR technology. The AR interface displays measurement values and the orientation and transformation of the markerless WHMAN (Water Hydraulic Manipulator) with efficient real-time tracking. The performance of the AR system was tested in different positions, and the method was validated in a real remote handling environment at DTP2, where it proved robust enough.
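Template-based matching by exhaustive similarity search, as described above, can be sketched with zero-mean normalized cross-correlation over image translations. This stdlib-only sketch searches translations only; a real system such as the one in the thesis would also search rotations and use an optimized implementation (e.g. OpenCV's `matchTemplate`). Function names and the toy data layout are illustrative.

```python
def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size
    grayscale patches given as lists of rows; 1.0 means a perfect match."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0  # flat patches never match

def locate(image, template):
    """Slide the template over every translation of the image and return
    the (row, col) of the best match together with its similarity score."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

The found position then anchors the virtual model that is superimposed on the live view, and the search in the next frame can be restricted to a neighborhood of this position for tracking.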