162 research outputs found

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    Full text link
    In this paper, we aim to improve human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as early as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a work bench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. Two prediction levels, one for motion intention and one for motion direction, have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data following an evidence accumulation approach to predict the reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The output of the two predictors is used as external input to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% at 160.0 ± 80.0 ms after movement onset.
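    As an illustration of the direction-prediction level described above, the following minimal Python sketch trains one Gaussian Mixture Model per reaching direction and accumulates per-frame log-likelihood evidence until a confidence margin is reached; the feature layout, the margin threshold, and the class structure are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): per-direction GMMs scored on streaming
# IMU/EMG feature frames, with accumulated log-likelihood evidence and a margin
# threshold acting as a simple dynamic stopping criterion.
import numpy as np
from sklearn.mixture import GaussianMixture

class DirectionPredictor:
    def __init__(self, n_directions, n_components=3, margin=5.0):
        self.models = [GaussianMixture(n_components=n_components)
                       for _ in range(n_directions)]
        self.margin = margin  # evidence margin required before committing

    def fit(self, features_per_direction):
        # features_per_direction[d]: array (n_frames, n_features) of IMU+EMG features
        for gmm, feats in zip(self.models, features_per_direction):
            gmm.fit(feats)

    def predict_stream(self, frames):
        # frames: iterable of feature vectors arriving after movement onset
        evidence = np.zeros(len(self.models))
        for frame in frames:
            frame = np.asarray(frame).reshape(1, -1)
            evidence += np.array([gmm.score(frame) for gmm in self.models])
            ranked = np.sort(evidence)[::-1]
            if ranked[0] - ranked[1] > self.margin:  # enough accumulated evidence
                return int(np.argmax(evidence))      # early decision
        return int(np.argmax(evidence))              # fall back to best guess
```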

    Autonomous robotic system for thermographic detection of defects in upper layers of carbon fiber reinforced polymers

    Get PDF
    Carbon Fiber Reinforced Polymers (CFRPs) are composites whose properties, such as a high strength-to-weight ratio and rigidity, are of interest in many industrial fields. Many defects arising in their production process are due to an incorrect distribution of the thermosetting polymer in the upper layers. In this work, such defects are effectively and efficiently detected by automatically analyzing the thermographic images obtained by Pulsed Phase Thermography (PPT) and comparing them with a defect-free reference. The flash lamp and infrared camera needed by PPT are mounted on an industrial robot so that surfaces of CFRP automotive components, car side blades in our case, can be inspected in a series of static tests. The thermographic image analysis is based on local contrast adjustment via UnSharp Masking (USM) and also takes advantage of the detailed knowledge of the entire system provided by the calibration procedures. This system could replace manual inspection, leading to a substantial increase in efficiency.
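    The local contrast adjustment via unsharp masking can be illustrated with a short sketch; the Gaussian smoothing scale, the gain, and the thresholded comparison against a defect-free reference below are illustrative assumptions rather than the parameters used in the paper.

```python
# Illustrative sketch only: unsharp masking to boost local contrast in a PPT phase
# image before comparing it with a defect-free reference. Parameter values are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(phase_image, sigma=5.0, amount=1.5):
    """Return a locally contrast-enhanced copy of a phase image."""
    blurred = gaussian_filter(phase_image.astype(np.float64), sigma=sigma)
    return phase_image + amount * (phase_image - blurred)

def defect_map(phase_image, reference, sigma=5.0, amount=1.5, threshold=0.1):
    """Flag pixels whose enhanced response deviates from the defect-free reference."""
    diff = np.abs(unsharp_mask(phase_image, sigma, amount) -
                  unsharp_mask(reference, sigma, amount))
    return diff > threshold
```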

    Automatic Color Inspection for Colored Wires in Electric Cables

    Get PDF
    In this paper, an automatic optical inspection system for checking the sequence of colored wires in electric cables is presented. The system is able to inspect cables with flat connectors differing in the type and number of wires. This variability is managed automatically by means of a self-learning subsystem and requires neither manual input from the operator nor loading new data into the machine. The system is coupled to a connector crimping machine and, once the model of a correct cable is learned, it can automatically inspect each cable assembled by the machine. The main contributions of this paper are: (i) the self-learning system; (ii) a robust segmentation algorithm for extracting wires from images even when they are strongly bent and partially overlapped; (iii) a color recognition algorithm able to cope with highlights and different finishes of the wire insulation. We report the system evaluation over a period of several months during the actual production of large batches of different cables; tests demonstrated a high level of accuracy and the absence of false negatives, which is a key requirement for guaranteeing defect-free production.
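    One plausible form of the highlight-tolerant color recognition step is sketched below: a wire patch is classified in HSV space after discarding highlight pixels with low saturation or very high value. The reference hues and thresholds are assumptions for illustration, not the values used by the system.

```python
# Hedged sketch of a color-recognition step: classify a wire's insulation color in HSV
# space while ignoring specular highlights. Reference hues/thresholds are illustrative.
import cv2
import numpy as np

REFERENCE_HUES = {"red": 0, "yellow": 30, "green": 60, "blue": 120}  # OpenCV hue scale 0-179

def classify_wire_color(bgr_patch):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    valid = (s > 60) & (v < 240)          # drop highlights and washed-out pixels
    if not np.any(valid):
        return None                       # patch is all highlight; defer decision
    hue = float(np.median(h[valid]))      # robust estimate of the dominant hue
    # circular hue distance to each reference color
    dists = {name: min(abs(hue - ref), 180 - abs(hue - ref))
             for name, ref in REFERENCE_HUES.items()}
    return min(dists, key=dists.get)
```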

    Teaching humanoid robotics by means of human teleoperation through RGB-D sensors

    Get PDF
    This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The goal is to safely lift an object by teleoperating a small humanoid. Students have to map human limbs onto robot joints, guarantee the robot's stability during the motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotics classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as the teaching methodology. The learning objectives of both the course and the project are introduced and compared with the students' background. The design choices and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solving the tasks. A set of evaluation results, including the students' personal feedback, is provided to validate the authors' approach. A discussion about possible future improvements is reported, with the hope of encouraging the further spread of educational robotics in schools at all levels.
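    A minimal sketch of the limb-to-joint mapping task, assuming 3D joint positions are available from an RGB-D skeleton tracker: the human elbow flexion angle is computed from shoulder, elbow, and wrist positions and clamped to a hypothetical robot joint range. Joint names and limits are illustrative, not those of the course robot.

```python
# Sketch under stated assumptions: retarget human elbow flexion onto a robot joint.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c (3D points)."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def map_elbow(shoulder, elbow, wrist, robot_limits=(0.0, 2.0)):
    """Map human elbow flexion (0 = straight arm) to a clamped robot joint command."""
    flexion = np.pi - joint_angle(shoulder, elbow, wrist)
    return float(np.clip(flexion, *robot_limits))
```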

    Efficient completeness inspection using real-time 3D color reconstruction with a dual-laser triangulation system

    Get PDF
    In this chapter, we present the final system resulting from the European project "3DComplete", aimed at creating a low-cost and flexible quality inspection system capable of capturing 2.5D color data for completeness inspection. The system uses a single color camera to simultaneously capture 3D data via laser triangulation and color texture via a special projector casting a narrow line of white light; the two are then combined into a color 2.5D model in real time. Many examples of completeness inspection tasks are reported that are extremely difficult to analyze with state-of-the-art 2D-based methods. Our system has been integrated into a real production environment, showing that completeness inspection incorporating 3D technology can be readily achieved in a short time and at low cost.
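    A simplified sketch of the laser-triangulation step, assuming a calibrated camera, a known laser plane, and negligible lens distortion: each detected laser-line pixel is back-projected to a viewing ray and intersected with the laser plane to obtain a 3D point. The function and parameter names are hypothetical.

```python
# Sketch only: recover a 3D profile from laser-line pixels by ray/plane intersection.
import numpy as np

def triangulate_laser_line(pixels, K, plane_n, plane_d):
    """pixels: (N, 2) laser-line detections; K: 3x3 camera intrinsics;
    plane_n, plane_d: laser plane defined by n . X = d in camera coordinates."""
    K_inv = np.linalg.inv(K)
    ones = np.ones((len(pixels), 1))
    rays = (K_inv @ np.hstack([pixels, ones]).T).T  # viewing ray through each pixel
    t = plane_d / (rays @ plane_n)                  # ray-plane intersection parameter
    return rays * t[:, None]                        # 3D points on the laser plane
```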

    Ensemble of Different Approaches for a Reliable Person Re-identification System

    Get PDF
    An ensemble of approaches for reliable person re-identification is proposed in this paper. The proposed ensemble is built by combining widely used person re-identification systems based on different color spaces with several variants of state-of-the-art approaches introduced in this paper. Different descriptors are tested, and both texture and color features are extracted from the images; the descriptors are then compared using different distance measures (e.g., the Euclidean distance, the angle, and the Jeffrey distance). To improve performance, a method based on skeleton detection, extracted from the depth map, is also applied when the depth map is available. The proposed ensemble is validated on three widely used datasets (CAVIAR4REID, IAS, and VIPeR), keeping the parameter set of each approach constant across all tests to avoid overfitting and to demonstrate that the proposed system can be considered a general-purpose person re-identification system. Our experimental results show that the proposed system offers significant improvements over baseline approaches. The source code of the approaches tested in this paper will be available at https://www.dei.unipd.it/node/2357 and http://robotics.dei.unipd.it/reid/.
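    The three descriptor distances mentioned above can be written compactly; the sketch below assumes plain feature vectors (histogram-like and non-negative in the Jeffrey case) and leaves out any descriptor-specific normalization used in the paper.

```python
# Illustrative implementations of the Euclidean, angle, and Jeffrey distances
# between two descriptor vectors a and b (1-D NumPy arrays).
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def jeffrey(a, b, eps=1e-12):
    # symmetrized Kullback-Leibler divergence between histogram-like descriptors
    p = a / (a.sum() + eps) + eps
    q = b / (b.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))
```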

    RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation

    Full text link
    This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously navigate to, identify, and reach areas of interest, and there recognize, localize, and manipulate work tools to perform complex manipulation tasks. The proposed contribution includes a modular software architecture in which each module solves specific sub-tasks and which can be easily extended to satisfy new requirements. The included indoor and outdoor tests demonstrate the capability of the proposed system to autonomously detect a target object (a panel) and precisely dock in front of it while avoiding obstacles. They show it can autonomously recognize and manipulate target work tools (i.e., wrenches and valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve stem). A specific case study is described in which the proposed modular architecture allows an easy switch to a semi-teleoperated mode. The paper exhaustively describes both the hardware and software setup of RUR53, its performance when tested at the 2017 Mohamed Bin Zayed International Robotics Challenge, and the lessons we learned when participating in this competition, where we ranked third in the Grand Challenge in collaboration with the Czech Technical University in Prague, the University of Pennsylvania, and the University of Lincoln (UK). Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.
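    A thin sketch of how a modular architecture might expose the switch to a semi-teleoperated mode: a supervisor delegates command generation to either an autonomous module or a teleoperation module. The module names and interface below are hypothetical, not the actual RUR53 design.

```python
# Hypothetical supervisor that routes control between an autonomous planner and a
# teleoperation module, illustrating the mode switch described above.
class Supervisor:
    def __init__(self, autonomous_module, teleop_module):
        self.modules = {"autonomous": autonomous_module, "teleop": teleop_module}
        self.mode = "autonomous"

    def switch_mode(self, mode):
        if mode not in self.modules:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def next_command(self, state):
        # delegate to whichever module currently owns the robot
        return self.modules[self.mode].compute_command(state)
```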

    Effect of lower limb exoskeleton on the modulation of neural activity and gait classification

    Get PDF
    Neurorehabilitation with robotic devices requires a paradigm shift to enhance human-robot interaction. The coupling of robot-assisted gait training (RAGT) with a brain-machine interface (BMI) represents an important step in this direction but requires better elucidation of the effect of RAGT on the user's neural modulation. Here, we investigated how different exoskeleton walking modes modify brain and muscular activity during exoskeleton-assisted gait. We recorded electroencephalographic (EEG) and electromyographic (EMG) activity from ten able-bodied volunteers walking with an exoskeleton under three modes of user assistance (i.e., transparent, adaptive, and full assistance) and during free overground gait. Results show that exoskeleton walking (irrespective of the exoskeleton mode) induces a stronger modulation of central mid-line mu (8-13 Hz) and low-beta (14-20 Hz) rhythms compared to free overground walking. These modifications are accompanied by a significant re-organization of the EMG patterns in exoskeleton walking. On the other hand, we observed no significant differences in neural activity during exoskeleton walking with the different assistance levels. We subsequently implemented four gait classifiers based on deep neural networks trained on the EEG data recorded during the different walking conditions. Our hypothesis was that the exoskeleton modes could impact the creation of a BMI-driven RAGT. We demonstrated that all classifiers achieved an average accuracy of 84.13 ± 3.49% in classifying swing and stance phases on their respective datasets. In addition, we demonstrated that the classifier trained on the transparent-mode exoskeleton data can classify gait phases during the adaptive and full modes with an accuracy of 78.3 ± 4.8%, while the classifier trained on free overground walking data fails to classify gait during exoskeleton walking (accuracy of 59.4 ± 11.8%). These findings provide important insights into the effect of robotic training on neural activity and contribute to the advancement of BMI technology for improving robotic gait rehabilitation therapy.
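    As a minimal sketch of a swing/stance gait-phase classifier (not the study's network), a small fully connected model can be trained on windows of EEG band-power features; the feature dimensionality, layer sizes, and training loop below are assumptions for illustration.

```python
# Hypothetical swing/stance classifier on EEG band-power feature windows (PyTorch).
import torch
import torch.nn as nn

class GaitPhaseClassifier(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 2),            # logits for stance / swing
        )

    def forward(self, x):
        return self.net(x)

# one gradient step on a batch of labelled feature windows (placeholder data)
model = GaitPhaseClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
features = torch.randn(16, 64)           # placeholder EEG feature batch
labels = torch.randint(0, 2, (16,))      # 0 = stance, 1 = swing
optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```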