93 research outputs found

    Development of an interface for intuitive teleoperation of Comau manipulator robots using RGB-D sensors

    In this work we developed an intuitive way to teleoperate a Comau industrial robot by means of a Microsoft Kinect device, in order to directly control the manipulator joints by retargeting specific human motions. The motion is remapped onto the robot joints by computing angles between vectors built from the positions of the human skeleton joints. The developed teleoperation system has been successfully tested on two real Comau robots, proving to be fast and highly reliable.
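Below is a minimal sketch, not taken from the paper, of the kind of angle computation the abstract describes: one robot joint command derived from three Kinect skeleton points. The joint names and coordinate values are illustrative assumptions.

```python
# Sketch: estimate a single robot joint angle (here, the elbow) from three
# Kinect skeleton joint positions. Names and values are placeholders.
import numpy as np

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Return the angle in radians between two 3D vectors."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example skeleton positions in metres (placeholder values).
shoulder = np.array([0.00, 1.40, 2.00])
elbow    = np.array([0.05, 1.15, 1.95])
wrist    = np.array([0.30, 1.05, 1.80])

upper_arm = elbow - shoulder   # vector along the upper arm
forearm   = wrist - elbow      # vector along the forearm

elbow_angle = angle_between(upper_arm, forearm)
print(f"Elbow joint command: {np.degrees(elbow_angle):.1f} deg")
```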

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands, owing to the difficulty of decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects, each implanted with an intracortical brain-computer interface and controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulating novel objects in densely cluttered environments.
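As a rough illustration of the arbitration idea described above (not the paper's implementation), the sketch below linearly blends a noisy decoded user command with an autonomous go-to-goal command; the blend weight plays the role of the adjustable assistance level, and all names and values are hypothetical.

```python
# Sketch: linear arbitration between a user velocity command and an
# autonomous policy driving the end effector toward an inferred goal.
import numpy as np

def arbitrate(user_cmd: np.ndarray,
              ee_pos: np.ndarray,
              inferred_goal: np.ndarray,
              alpha: float) -> np.ndarray:
    """Blend the user command with a go-to-goal command; alpha = assistance level."""
    auto_cmd = inferred_goal - ee_pos                 # simple proportional policy
    norm = np.linalg.norm(auto_cmd)
    if norm > 1e-6:
        auto_cmd = auto_cmd / norm * np.linalg.norm(user_cmd)  # match user speed
    return (1.0 - alpha) * user_cmd + alpha * auto_cmd

# Example: heavy assistance (alpha = 0.8) with an erratic decoded command.
user_cmd = np.array([0.10, -0.02, 0.05])       # decoded velocity, m/s
ee_pos = np.array([0.40, 0.00, 0.30])          # current end-effector position
goal = np.array([0.55, 0.10, 0.25])            # goal inferred from vision/intent
print(arbitrate(user_cmd, ee_pos, goal, alpha=0.8))
```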

    Online Markerless Augmented Reality for Remote Handling System in Bad Viewing Conditions

    This thesis studies the development of Augmented Reality (AR) for the ITER mock-up remote handling environment. An important goal of employing an AR system is three-dimensional mapping of the scene, which provides position and orientation information about the environment to the operator. Remote Handling (RH) in harsh environments usually has to cope with insufficient visual feedback for the human operator, due to the limited number of on-site cameras and poor viewing angles. AR enables the user to perceive virtual computer-generated objects in a real scene; the most common goals include visibility enhancement and the provision of extra information, such as positional data of various objects. The proposed AR system first recognizes and locates the object using a template-based matching algorithm, and then augments the virtual model on top of the found object. A tracking algorithm is exploited to locate the object across a sequence of frames. Conceptually, the template is found in each frame by computing the similarity between the template and the image for all relevant poses (rotation and translation) of the template. The objective of this thesis is to investigate whether ITER remote handling at DTP2 (Divertor Test Platform 2) can benefit from AR technology. The AR interface provides measurement values and the orientation and transformation of the markerless WHMAN (Water Hydraulic Manipulator) through efficient real-time tracking. The performance of the AR system was tested at different positions, and the method was validated in a real remote handling environment at DTP2, where it proved sufficiently robust.
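For the translation-only part of the template matching step described above, a minimal sketch using OpenCV's normalized cross-correlation is shown below; the rotation search and the synthetic scene are simplifications, and the code is not the thesis implementation.

```python
# Sketch: 2D translational template matching on a synthetic grayscale scene.
import cv2
import numpy as np

# Synthetic scene with a bright square standing in for the tracked object.
scene = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(scene, (180, 100), (220, 140), 255, thickness=-1)

# Template cut from a reference view of the object.
template = scene[95:145, 175:225].copy()

# Slide the template over the scene and take the best-scoring location.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(f"Best match at {max_loc} with score {max_val:.3f}")
```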

    Intuitive Teleoperation of an Intelligent Robotic System Using Low-Cost 6-DOF Motion Capture

    There is currently a wide variety of six degree-of-freedom (6-DOF) motion capture technologies available. However, these systems tend to be very expensive and thus cost-prohibitive. A software system was developed to provide 6-DOF motion capture using the Nintendo Wii remote's (wiimote) sensors, an infrared beacon, and a novel hierarchical linear-quaternion Kalman filter. The software is made freely available, and the hardware costs less than one hundred dollars. Using this motion capture software, a robotic control system was developed to teleoperate a 6-DOF robotic manipulator via the operator's natural hand movements. The teleoperation system requires calibration of the wiimote's infrared camera to obtain an estimate of the wiimote's 6-DOF pose. However, since the raw images from the wiimote's infrared camera are not available, a novel camera-calibration method was developed to obtain the camera's intrinsic parameters, which are used to obtain a low-accuracy estimate of the 6-DOF pose. By fusing this low-accuracy pose estimate with accelerometer and gyroscope measurements, an accurate estimate of the 6-DOF pose is obtained for teleoperation. Preliminary testing suggests that the motion capture system has an accuracy of better than a millimetre in position and better than one degree in attitude. Furthermore, whole-system tests demonstrate that the teleoperation system is capable of controlling the end effector of a robotic manipulator to match the pose of the wiimote. Since this system provides 6-DOF motion capture at a fraction of the cost of traditional methods, it has wide applicability in the field of robotics and as a 6-DOF human input device for controlling 3D virtual computer environments.
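The sketch below illustrates the sensor fusion idea in the spirit of the abstract, but with a much simpler complementary filter rather than the thesis's hierarchical linear-quaternion Kalman filter: gyroscope rates are integrated at high rate and the estimate is nudged toward the low-accuracy camera-derived attitude. All values and the gain are assumptions.

```python
# Sketch: complementary filter fusing gyro integration with a camera attitude.
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Propagate the attitude quaternion with body angular rate omega (rad/s)."""
    dq = quat_mul(q, np.array([0.0, *omega])) * 0.5 * dt
    q_new = q + dq
    return q_new / np.linalg.norm(q_new)

def correct_with_camera(q_est, q_cam, k=0.05):
    """Blend the propagated estimate toward the camera measurement."""
    if np.dot(q_est, q_cam) < 0:              # keep quaternions in the same hemisphere
        q_cam = -q_cam
    q_new = (1.0 - k) * q_est + k * q_cam     # linear blend, then renormalize
    return q_new / np.linalg.norm(q_new)

q = np.array([1.0, 0.0, 0.0, 0.0])            # initial attitude
omega = np.array([0.0, 0.0, 0.2])             # yaw rate from the gyro, rad/s
q_cam = np.array([0.999, 0.0, 0.0, 0.026])    # noisy camera attitude estimate

for _ in range(100):                          # 1 s of gyro data at 100 Hz
    q = integrate_gyro(q, omega, dt=0.01)
q = correct_with_camera(q, q_cam)
print(q)
```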

    Human to robot hand motion mapping methods: review and classification

    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions are summarized and discussed. We particularly attempt to organize under macro-categories the great quantity of presented methods, which are often difficult to view from a general standpoint due to differing fields of application, specific uses of algorithms, terminology, and declared goals of the mappings. First, a brief historical overview is given in order to trace the emergence of the human to robot hand mapping problem as both a conceptual and an analytical challenge that remains open today. Thereafter, the survey focuses mainly on a classification of modern mapping methods under six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based, and hybrid mappings. For each of these categories, the general view that connects the related studies is provided, and representative references are highlighted. Finally, a concluding discussion along with the authors' point of view regarding desirable future trends is reported. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00.
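As a concrete illustration of the survey's first category, a direct joint mapping, the sketch below linearly rescales each measured human joint angle to the corresponding robot joint range; the joint names and limits are placeholder assumptions, not drawn from any of the reviewed methods.

```python
# Sketch: direct joint mapping by linear rescaling of joint ranges.
import numpy as np

HUMAN_LIMITS = {"index_mcp": (0.0, 1.57), "index_pip": (0.0, 1.92)}   # rad
ROBOT_LIMITS = {"index_mcp": (0.0, 1.40), "index_pip": (0.0, 1.70)}   # rad

def direct_joint_map(human_angles: dict) -> dict:
    """Map human joint angles to robot joint angles by linear rescaling."""
    robot_angles = {}
    for joint, theta in human_angles.items():
        h_lo, h_hi = HUMAN_LIMITS[joint]
        r_lo, r_hi = ROBOT_LIMITS[joint]
        t = (np.clip(theta, h_lo, h_hi) - h_lo) / (h_hi - h_lo)
        robot_angles[joint] = r_lo + t * (r_hi - r_lo)
    return robot_angles

print(direct_joint_map({"index_mcp": 0.8, "index_pip": 1.0}))
```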