    A vision-based teleoperation system for robotic systems

    Although advances in robotic perception are increasing autonomous capabilities, human intelligence is still considered a necessity in unstructured or unpredictable environments. Hence, in line with the Industry 4.0 paradigm, humans and robots are encouraged to cooperate through Human-Robot Interaction (HRI). HRI can be physical (pHRI) or not, depending on the assigned task. For example, when the robot is confined to a dangerous environment or must handle hazardous materials, pHRI is not recommended. In these cases, robot teleoperation may be necessary. A teleoperation system enables the exploration and exploitation of spaces where the user's presence is not allowed, so the operator needs to move the robot remotely. Whereas most human-machine interfaces for teleoperation are built around a mechanical device, vision-based interfaces require no physical contact with external devices. This grants a more natural and intuitive interaction, which is reflected in task performance. Our proposed system is a novel robot teleoperation system that exploits RGB cameras, which are easy to use and widely available on the market at low cost. A ROS-based framework has been developed to provide hand tracking and hand-gesture recognition, exploiting the OpenPose software built on the Deep Learning framework Caffe. This, combined with the ready availability of RGB cameras, makes the framework strongly open-source-oriented and highly replicable on all ROS-based platforms. It is worth noting that this first version does not include Z-axis control. Controlling the third axis robustly requires a precision and sensitivity that 3D vision systems cannot provide unless very expensive devices are adopted. Our aim is to include third-axis control in a future release.
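
    The abstract does not disclose the framework's node layout or topic names, so the following is only a minimal sketch of how such a ROS node might forward RGB frames to a hand-keypoint detector and republish 2D coordinates (consistent with the absence of Z-axis control). The topic names and the detect_hand() helper are hypothetical placeholders; the paper's framework performs the actual detection with OpenPose.

```python
# Minimal sketch (not the paper's actual code) of a ROS node that forwards
# RGB frames to a hand-keypoint detector and republishes 2D coordinates.
# Topic names and detect_hand() are hypothetical placeholders.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import PointStamped

bridge = CvBridge()

def detect_hand(frame):
    """Hypothetical wrapper around the OpenPose hand detector.
    Should return (u, v) pixel coordinates of the tracked hand."""
    raise NotImplementedError

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    u, v = detect_hand(frame)
    out = PointStamped()
    out.header = msg.header
    out.point.x, out.point.y = float(u), float(v)  # no Z-axis in this version
    pub.publish(out)

rospy.init_node("hand_tracker")
pub = rospy.Publisher("/hand/keypoint", PointStamped, queue_size=1)
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```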

    A gesture-based robot program building software

    With the advent of intelligent systems, industrial workstations and working areas have undergone a revolution. The increased need for automation is met by high-performance industrial robots in fully automated workstations. In the manufacturing industry, sophisticated tasks still require human intervention in completely manual workstations, albeit at a slower production rate. To improve the efficiency of manual workstations, Collaborative Robots (Co-Bots) have been designed as part of the Industry 4.0 paradigm. These robots collaborate with humans in safe environments to support workers in their tasks, thus achieving higher production rates than completely manual workstations. The key factor is that their adoption relieves humans from stressful and heavy operations, decreasing job-related health issues. The drawback of Co-Bots lies in their design: to work side by side with humans they must guarantee safety; thus, they operate under very strict limits on their forces and velocities, which reduce their efficiency, especially in non-trivial tasks. To overcome these limitations, our idea is to design Meta-Collaborative Workstations (MCWs), where the robot operates behind a safety cage, either physical or virtual, and the operator interacts with the robot, whether industrial or collaborative, through the same communication channel. Our proposed system has been developed to easily build robot programs purposely designed for MCWs, based on (i) the recognition of hand gestures (using a vision-based communication channel) and (ii) ROS to carry out communication with the robot.
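
    As a rough illustration of the gesture-to-program idea, the sketch below maps recognized gesture labels to robot program primitives and accumulates them into a program. The gesture vocabulary, the primitives, and the RobotProgram class are all hypothetical; the paper does not disclose its actual gesture set or program representation.

```python
# Rough illustration (not the paper's implementation) of building a robot
# program from recognized hand-gesture labels in an MCW setting.
# Gesture names and primitives below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RobotProgram:
    steps: list = field(default_factory=list)

GESTURE_TO_PRIMITIVE = {
    "open_palm": "move_to_home",
    "fist": "close_gripper",
    "two_fingers": "open_gripper",
    "thumb_up": "execute_program",  # terminal gesture: run the program
}

def on_gesture(label: str, program: RobotProgram) -> None:
    """Append the primitive for a recognized gesture, or dispatch the program."""
    primitive = GESTURE_TO_PRIMITIVE.get(label)
    if primitive is None:
        return  # unrecognized gesture: ignore
    if primitive == "execute_program":
        # In the real system, the program would be sent to the robot over ROS.
        print("Executing program:", program.steps)
    else:
        program.steps.append(primitive)

# Example: build a short sequence gesture by gesture, then execute it.
prog = RobotProgram()
for g in ["open_palm", "fist", "open_palm", "two_fingers", "thumb_up"]:
    on_gesture(g, prog)
```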

    Validation of a smart mirror for gesture recognition in gym training performed by a vision-based deep learning system

    This paper illustrates the development and validation of a smart mirror for sports training. The application is based on the skeletonization algorithm MediaPipe and runs on an embedded device, an Nvidia Jetson Nano equipped with two fisheye cameras. The software has been evaluated on the biceps curl exercise. The elbow angle has been measured by both MediaPipe and the motion capture system BTS (ground truth), and the resulting values have been compared to determine angle uncertainty, residual errors, and intra-subject and inter-subject repeatability. The uncertainty of the joint estimation and the quality of the images captured by the cameras are reflected in the final uncertainty of the indicator over time, highlighting areas of improvement for further development.
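
    As an illustration of the angle-measurement step, here is a minimal sketch of computing an elbow angle from MediaPipe Pose landmarks on a live camera stream. The camera index, the choice of the left arm, and the 2D angle computation are assumptions for illustration only; the paper's system runs on a Jetson Nano with two fisheye cameras and validates against the BTS motion capture system.

```python
# Minimal sketch of measuring the elbow angle from MediaPipe Pose landmarks
# on a live camera stream. Camera index, left-arm choice, and the 2D angle
# computation are illustrative assumptions, not the paper's exact pipeline.
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def angle_deg(a, b, c):
    """Angle at vertex b formed by points a-b-c, in degrees (2D)."""
    ang = math.degrees(
        math.atan2(c.y - b.y, c.x - b.x) - math.atan2(a.y - b.y, a.x - b.x)
    )
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            shoulder = lm[mp_pose.PoseLandmark.LEFT_SHOULDER.value]
            elbow = lm[mp_pose.PoseLandmark.LEFT_ELBOW.value]
            wrist = lm[mp_pose.PoseLandmark.LEFT_WRIST.value]
            print(f"elbow angle: {angle_deg(shoulder, elbow, wrist):.1f} deg")
cap.release()
```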