829 research outputs found

    A machine learning approach for collaborative robot smart manufacturing inspection for quality control systems

    The 4th industrial revolution promotes the automatic inspection of all products towards zero-defect, high-quality manufacturing. In this context, collaborative robotics, where humans and machines share the same space, is a suitable approach that combines the accuracy of a robot with the ability and flexibility of a human. This paper describes an innovative approach that uses a collaborative robot to support smart inspection and corrective actions for quality control systems in the manufacturing process, complemented by an intelligent system that learns and adapts its behavior according to the inspected parts. This intelligent system implements a reinforcement learning algorithm, which makes the approach more robust because it can learn and adapt the trajectory. In preliminary experiments, a UR3 robot equipped with a force-torque sensor was trained to follow a path for a product quality inspection task. © 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the scientific committee of FAIM 2021. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UIDB/05757/2020.
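    As a rough illustration of the kind of reinforcement-learning loop described in this abstract, the sketch below applies tabular Q-learning to small per-waypoint path corrections, with a reward derived from force-torque readings. It is not the authors' implementation: the action discretisation, the 5 N target contact force, and the move_to()/read_wrench() robot-interface callbacks are hypothetical placeholders.

        # Minimal sketch (assumptions noted above): Q-learning over small
        # waypoint offsets for an inspection path, rewarded by how close the
        # measured contact force stays to an assumed 5 N target.
        import random
        from collections import defaultdict

        ACTIONS = [-1.0, 0.0, 1.0]          # candidate waypoint offsets in mm (assumed)
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
        Q = defaultdict(float)              # Q[(waypoint_index, offset)]

        def choose_action(i):
            if random.random() < EPSILON:
                return random.choice(ACTIONS)              # explore
            return max(ACTIONS, key=lambda a: Q[(i, a)])   # exploit

        def run_episode(waypoints, move_to, read_wrench):
            # move_to / read_wrench are hypothetical robot-interface callbacks
            for i, wp in enumerate(waypoints):
                a = choose_action(i)
                move_to(wp + a)
                reward = -abs(read_wrench() - 5.0)         # penalise deviation from target force
                next_best = max(Q[(i + 1, b)] for b in ACTIONS) if i + 1 < len(waypoints) else 0.0
                Q[(i, a)] += ALPHA * (reward + GAMMA * next_best - Q[(i, a)])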

    Neural Dynamic Movement Primitives -- a survey

    One of the most important challenges in robotics is producing accurate trajectories and controlling their dynamic parameters so that robots can perform different tasks. The ability to provide such motion control is closely related to how such movements are encoded. Advances in deep learning have had a strong impact on the development of novel approaches for Dynamic Movement Primitives. In this work, we survey the scientific literature on Neural Dynamic Movement Primitives, complementing existing surveys on Dynamic Movement Primitives.
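    For context, the sketch below integrates the classical one-dimensional Dynamic Movement Primitive (Ijspeert-style transformation, forcing and canonical systems) that the surveyed neural variants typically replace or parameterise with networks; the gains and basis-function layout are illustrative choices, not taken from any particular surveyed paper.

        # Classical 1-D DMP rollout: a spring-damper system driven toward the
        # goal g, shaped by a basis-function forcing term and paced by a
        # canonical phase variable x.
        import numpy as np

        def rollout_dmp(y0, g, weights, centers, widths,
                        tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=1.0):
            y, z, x = y0, 0.0, 1.0
            path = []
            for _ in range(int(tau / dt)):
                psi = np.exp(-widths * (x - centers) ** 2)                 # Gaussian bases
                f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)   # forcing term
                z += dt / tau * (alpha * (beta * (g - y) - z) + f)         # transformation system
                y += dt / tau * z
                x += dt / tau * (-alpha_x * x)                             # canonical system
                path.append(y)
            return np.array(path)

        # Example: 10 basis functions; zero weights reproduce a plain
        # point-to-point motion from 0.0 to 1.0.
        centers = np.linspace(1.0, 0.01, 10)
        trajectory = rollout_dmp(0.0, 1.0, np.zeros(10), centers, np.full(10, 20.0))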

    Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions

    Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems such as the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is thus to develop a novel, rather simple method that uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. To test our method, we use a four degree-of-freedom reaching problem in 3D space, simulated by a robot arm system with two joints of two DOF each. Function approximation is based on 4D overlapping kernels (receptive fields), and the state-action space contains about 10,000 of them. Different types of reward structures are compared, for example reward-on-touching-only against reward-on-approach; furthermore, forbidden joint configurations are punished. A continuous action space is used. Despite the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution, usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it could be used on a real robot for problems where mixed rewards can be defined and where other types of learning might be difficult.
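    A minimal sketch of the receptive-field idea described in this abstract is given below, assuming normalised Gaussian kernels over the 4-D joint-angle state and a reward-weighted update of each field's preferred continuous action. The number of fields, kernel width, learning rate and exploration noise are assumed values, and the update rule is a simple illustrative one rather than the exact rule used in the paper.

        # Receptive-field function approximation with a continuous action
        # read-out: the action is the activation-weighted average of each
        # field's preferred action, plus exploration noise.
        import numpy as np

        rng = np.random.default_rng(0)
        N_FIELDS, SIGMA, ETA, NOISE = 500, 0.5, 0.1, 0.05         # assumed hyperparameters
        centers = rng.uniform(-np.pi, np.pi, size=(N_FIELDS, 4))  # 4-D kernel centres

        def activations(state):
            d = np.linalg.norm(centers - state, axis=1)
            a = np.exp(-(d / SIGMA) ** 2)
            return a / (a.sum() + 1e-10)                          # normalised kernel activations

        def act(field_actions, state):
            # field_actions: (N_FIELDS, 4) array of per-field preferred joint velocities
            return activations(state) @ field_actions + rng.normal(0.0, NOISE, 4)

        def update(field_actions, state, action, reward):
            # Pull active fields' preferred actions toward actions that were
            # followed by positive reward (illustrative reward-weighted rule,
            # applied in place to field_actions).
            a = activations(state)
            field_actions += ETA * reward * a[:, None] * (action - field_actions)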

    Deep reinforcement learning for soft, flexible robots : brief review with impending challenges

    The increasing trend of studying the innate softness of robotic structures and combining it with the benefits of extensive developments in embodied intelligence has led to the emergence of a relatively new yet rewarding field: intelligent soft robotics. The fusion of deep reinforcement learning algorithms with soft, bio-inspired structures points toward the prospect of designing fully self-sufficient agents capable of learning from observations collected from their environment. For soft robotic structures possessing countless degrees of freedom, it is often not practical to formulate the mathematical models necessary for training a deep reinforcement learning (DRL) agent. Deploying current imitation learning algorithms on soft robotic systems has yielded competent results. This review article gives an overview of such algorithms, along with instances of their application to real-world scenarios yielding frontier results. Brief descriptions highlight the various emerging branches of DRL research in soft robotics.