
    Functional mimicry of Ruffini receptors with fibre Bragg gratings and deep neural networks enables a bio-inspired large-area tactile-sensitive skin

    Collaborative robots are expected to physically interact with humans in daily living and in the workplace, including industrial and healthcare settings. A key enabling technology is tactile sensing, which currently requires addressing the outstanding scientific challenge of simultaneously detecting contact location and intensity by means of soft, conformable artificial skins that adapt over large areas to the complex curved geometries of robot embodiments. In this work, the development of a large-area sensitive soft skin with a curved geometry is presented, allowing for total-body robot coverage through modular patches. The biomimetic skin consists of a soft polymeric matrix, resembling a human forearm, embedded with photonic fibre Bragg grating transducers, which partially mimic Ruffini mechanoreceptor functionality with diffuse, overlapping receptive fields. A convolutional neural network deep learning algorithm and a multigrid neuron integration process were implemented to decode the fibre Bragg grating sensor outputs and infer contact force magnitude and localization across the skin surface. Median errors of 35 mN (interquartile range 56 mN) and 3.2 mm (interquartile range 2.3 mm) were achieved for force and localization predictions, respectively. Demonstrations with an anthropomorphic arm pave the way towards artificial-intelligence-based integrated skins enabling safe human–robot cooperation via machine intelligence.
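    As an illustration of the decoding stage, the sketch below shows a minimal convolutional network that regresses contact coordinates and force magnitude from an array of fibre Bragg grating wavelength-shift readings. It is not the authors' network: the 8x8 grating grid, the layer sizes, and the PyTorch framing are all assumptions made for illustration.

    import torch
    import torch.nn as nn

    class FBGSkinDecoder(nn.Module):
        """Toy CNN mapping FBG wavelength shifts to (x, y, force)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
                nn.Linear(64, 3),  # outputs: contact x, contact y, force
            )

        def forward(self, wavelength_shifts):
            # wavelength_shifts: (batch, 1, 8, 8) normalized Bragg shifts
            return self.head(self.features(wavelength_shifts))

    model = FBGSkinDecoder()
    readings = torch.randn(2, 1, 8, 8)  # two synthetic sensor frames
    print(model(readings).shape)        # torch.Size([2, 3])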

    Encoding Multiple Sensor Data for Robotic Learning Skills from Multimodal Demonstration

    Learning a task such as pushing something, where constraints on both position and force have to be satisfied, is usually difficult for a collaborative robot. In this work, we propose a multimodal teaching-by-demonstration system that enables the robot to perform this kind of task. The basic idea is to transfer the adaptation of multimodal information from a human tutor to the robot by taking into account multiple sensor signals (i.e., motion trajectories, stiffness, and force profiles). The human tutor's stiffness is estimated from the limb surface electromyography (EMG) signals obtained during the demonstration phase. The force profiles in Cartesian space are collected from a force/torque sensor mounted between the robot endpoint and the tool. Subsequently, a hidden semi-Markov model (HSMM) is used to encode the multiple signals in a unified manner. The correlations between position and the other three control variables (i.e., velocity, stiffness, and force) are encoded with separate HSMMs. Based on the estimated HSMM parameters, Gaussian mixture regression (GMR) is then used to generate the expected control variables. The learned variables are further mapped into a joint-space impedance controller through inverse kinematics for reproduction of the task. Comparative tests have been conducted to verify the effectiveness of our approach on a Baxter robot.
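    The regression stage can be illustrated with a short sketch. The code below is a simplified stand-in for the paper's pipeline, not its implementation: the HSMM temporal encoding is omitted, a plain Gaussian mixture is fitted on synthetic one-dimensional data instead, and only the GMR conditioning step is shown.

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    # Synthetic demonstration data: 1-D position -> 1-D contact force.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, size=(500, 1))
    force = np.sin(2 * np.pi * pos) + 0.05 * rng.standard_normal(pos.shape)
    gmm = GaussianMixture(n_components=5, random_state=0).fit(
        np.hstack([pos, force]))

    def gmr(x_query, gmm, d_in=1):
        """Expected output given an input, marginalizing over components."""
        x = np.atleast_1d(x_query)
        h = np.array([w * multivariate_normal.pdf(x, m[:d_in], c[:d_in, :d_in])
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()  # responsibility of each component for this input
        out = np.zeros(gmm.means_.shape[1] - d_in)
        for hk, m, c in zip(h, gmm.means_, gmm.covariances_):
            # Conditional mean of the output block given the input block.
            gain = c[d_in:, :d_in] @ np.linalg.inv(c[:d_in, :d_in])
            out += hk * (m[d_in:] + gain @ (x - m[:d_in]))
        return out

    print(gmr(0.25, gmm))  # expected force near sin(pi/2) = 1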

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, deep neural networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human–machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 programme (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing. The chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human–robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open-access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Assistant robot through deep learning

    This article presents work oriented to assistive robotics, in which a scenario is established for a robot to deliver a tool into the hand of a user after the user has verbally requested it by name. For this, three convolutional neural networks are trained: one for recognition of a group of tools (scalpel, screwdriver, and scissors), which obtained an accuracy of 98% in identifying the tools established for the application; one for speech recognition, trained with the names of the tools in Spanish, which reached a validation accuracy of 97.5% in recognizing the words; and one for recognition of the user's hand, classifying two gestures (open and closed hand), where an accuracy of 96.25% was achieved. With these networks, real-time tests were performed, and each tool was delivered with 100% accuracy: the robot correctly identified what the user requested, recognized each tool, and delivered the required one when the user opened their hand, taking an average of 45 seconds to execute the application.
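    The way the three networks interact can be sketched as a simple decision loop. The snippet below is purely illustrative: the stub functions, the Spanish command vocabulary, and the delivery logic are hypothetical placeholders for the trained CNNs and robot control described above, not code from the article.

    import time

    # Assumed Spanish command word -> tool label mapping (illustrative only).
    TOOLS = {"escalpelo": "scalpel",
             "destornillador": "screwdriver",
             "tijeras": "scissors"}

    def recognize_speech(audio):   # stand-in for the speech-recognition CNN
        return "tijeras"

    def detect_tool(frame):        # stand-in for the tool-recognition CNN
        return "scissors"

    def classify_hand(frame):      # stand-in for the gesture CNN
        return "open"              # "open" or "closed"

    def assist(audio, camera_frames):
        requested = TOOLS[recognize_speech(audio)]
        for frame in camera_frames:
            if (detect_tool(frame) == requested
                    and classify_hand(frame) == "open"):
                return f"deliver {requested}"  # release when the hand opens
            time.sleep(0.1)                    # poll the next camera frame
        return "abort"

    print(assist(audio=None, camera_frames=[object()]))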