337 research outputs found

    A new neural architecture based on ART and AVITE models for anticipatory sensory-motor coordination in robotics

    In this paper a novel sensory-motor neural controller for robotic reaching and target tracking is proposed. It is based on how the human system projects sensory stimuli onto the motor joints, sending motor commands to each articulation while avoiding, in most phases of the movement, feedback of visual information. The proposed neural architecture autonomously generates a structure of learning cells based on adaptive resonance theory (ART), together with a neural mapping of the sensory-motor coordinate systems in each cell of the arm workspace. This permits fast open-loop control of a robot based on proprioceptive information, and a precise grasping position in each cell, by mapping 3D spatial positions onto redundant joints. The architecture has been trained, implemented and tested on a visuo-motor robotic platform, and its robustness, precision and speed have been validated. This work was supported in part by the Séneca Foundation (Spain) under PCMC75/00078/FS/02, and by the Spanish Science & Technology Ministry (MCYT) under research project TIC 2003-08164-C03-03.
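The cell-generation idea can be sketched in a few lines: a vigilance threshold decides whether an incoming 3D position resonates with an existing workspace cell or commits a new one. This is an illustrative, much-simplified sketch in the spirit of ART clustering, not the authors' controller; the 0.1 prototype learning rate and the `vigilance` parameter are assumptions.

```python
import math

def art_like_cells(points, vigilance):
    """Cluster 3D positions into workspace 'cells', committing a new
    cell prototype whenever no existing one matches closely enough
    (a vigilance test, loosely in the spirit of ART)."""
    cells = []   # prototype centers
    labels = []  # cell index assigned to each input point
    for p in points:
        best, best_d = None, float("inf")
        for i, c in enumerate(cells):
            d = math.dist(p, c)
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > vigilance:
            cells.append(list(p))          # no resonance: commit a new cell
            labels.append(len(cells) - 1)
        else:
            # resonance: nudge the winning prototype toward the sample
            cells[best] = [c + 0.1 * (x - c) for c, x in zip(cells[best], p)]
            labels.append(best)
    return cells, labels
```

For example, feeding in two tight clusters of arm positions with a 0.5 vigilance radius yields two cells, one per cluster.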

    DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information

    Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraints as humans actually do. Among the different tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for the study, and then the implementation, of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images, which consist of frames extracted from videos of grasping and manipulation tasks with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in previous frames. Experimental validation has been performed on two datasets of dynamic hand-centric strategies, in which subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching performance of up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be applied to robotics, e.g., for planning and control of soft anthropomorphic manipulators.
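The CNN-plus-RNN idea — per-frame features folded into a recurrent state so that past frames influence the current action label — can be sketched in miniature. This toy stand-in replaces the CNN with precomputed feature vectors and the RNN with a leaky integrator plus a linear readout; all names and the `decay` value are illustrative assumptions, not the paper's model.

```python
def classify_sequence(frame_features, weights, decay=0.5):
    """Toy stand-in for the CNN+RNN pipeline: each frame's feature
    vector (what a CNN would emit) updates a recurrent state, and the
    action label for the current frame is read off that state, so the
    history of previous frames influences the current prediction."""
    state = [0.0] * len(frame_features[0])
    labels = []
    for feat in frame_features:
        # leaky integration: the state mixes history with the new frame
        state = [decay * s + (1 - decay) * f for s, f in zip(state, feat)]
        # linear readout: pick the action whose weight row scores highest
        scores = [sum(w * s for w, s in zip(row, state)) for row in weights]
        labels.append(scores.index(max(scores)))
    return labels
```

With two actions and identity readout weights, a sequence of two "action 0" frames followed by one "action 1" frame is labeled [0, 0, 1]: the state carries history but still tracks the latest evidence.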

    Boosting precision crop protection towards agriculture 5.0 via machine learning and emerging technologies: A contextual review

    Crop protection is a key activity for the sustainability and feasibility of agriculture in the current context of climate change, which is destabilizing agricultural practices and increasing the incidence of established and invasive pests, and of a growing world population that requires guaranteeing the food supply chain and ensuring food security. In view of these events, this article provides a contextual review, in six sections, of the role of artificial intelligence (AI), machine learning (ML) and other emerging technologies in solving current and future challenges of crop protection. Over time, crop protection has progressed from a primitive agriculture 1.0 (Ag1.0) through various technological developments to reach a level of maturity closely in line with Ag5.0 (Section 1), which is characterized by successfully leveraging ML capacity and modern agricultural devices and machines that perceive, analyze and actuate following the main stages of precision crop protection (Section 2). Section 3 presents a taxonomy of ML algorithms that support the development and implementation of precision crop protection, while Section 4 analyses the scientific impact of ML on the basis of an extensive bibliometric study of >120 algorithms, outlining the most widely used ML and deep learning (DL) techniques currently applied in relevant case studies on the detection and control of crop diseases, weeds and pests. Section 5 describes 39 emerging technologies in the fields of smart sensors and other advanced hardware devices, telecommunications, proximal and remote sensing, and AI-based robotics that will foreseeably lead the next generation of perception-based, decision-making and actuation systems for digitized, smart and real-time crop protection in a realistic Ag5.0. Finally, Section 6 highlights the main conclusions and final remarks.

    Recognition of gait patterns in human motor disorders using a machine learning approach

    Master's dissertation in Industrial Electronics and Computers Engineering. With advanced age, the occurrence of motor disturbances becomes more prevalent and can lead to gait pathologies, increasing the risk of falls. Currently, there are many available gait monitoring systems that can aid in gait disorder diagnosis by extracting relevant data from a subject's gait, which increases the amount of data to be processed in working time. To accelerate this process and to provide an objective tool for systematic clinical-diagnosis support, machine learning methods are a powerful addition, capable of processing great amounts of data and of uncovering non-linear relationships in the data. The purpose of this dissertation is the development of a gait pattern recognition system based on a machine learning approach to support the clinical diagnosis of post-stroke gait. This includes the development of a data estimation tool capable of computing several features from inertial sensors. Four different neural networks were added to the classification tool: feed-forward (FFNN), convolutional (CNN) and two recurrent neural networks (LSTM and CLSTM). The performance of all classification models was analyzed and compared in order to select the most effective method of gait analysis; the performance metric used is the Matthews Correlation Coefficient. The classifiers that exhibited the best performance were Support Vector Machines (SVM), k-Nearest Neighbors (KNN), CNN, LSTM and CLSTM, each with a Matthews correlation coefficient of 1 on the test set. Although the first two classifiers reached the same performance as the neural networks, the latter reached this performance systematically and without the need for explicit dimensionality-reduction methods.
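The evaluation metric above, the Matthews Correlation Coefficient, is easy to state explicitly for the binary case; a minimal implementation (not the dissertation's code) is:

```python
def matthews_corrcoef(y_true, y_pred):
    """Binary Matthews Correlation Coefficient: +1 is perfect
    prediction, 0 is chance level, -1 is total disagreement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays informative on imbalanced classes, which is one reason it suits clinical classification tasks like this one.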

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control, and discusses the potential benefits of integrating AI, soft robots and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs. (Preprint; to appear in the Journal of Field Robotics.)

    Robotic Trajectory Tracking: Position- and Force-Control

    This thesis employs a bottom-up approach to develop robust and adaptive learning algorithms for trajectory tracking: position and torque control. In a first phase, the focus is on following a freeform surface in a discontinuous manner. In addition to the resulting switching constraints, disturbances and uncertainties, the case of unknown robot models is addressed. In a second phase, once contact has been established between surface and end effector and the freeform path is being followed, a desired force is applied. In order to react to changing circumstances, the manipulator needs to exhibit the features of an intelligent agent, i.e. it needs to learn and adapt its behaviour based on a combination of constant interaction with its environment and preprogrammed goals or preferences. The robotic manipulator mimics human behaviour using bio-inspired algorithms; in this way, the know-how and experience of human operators is exploited, as their knowledge is translated into robot skills. A selection of promising concepts is explored, developed and combined to extend the application areas of robotic manipulators from monotonous, basic tasks in stiff environments to complex constrained processes. Conventional concepts (sliding mode control, PID) are combined with bio-inspired learning (BELBIC, reinforcement-based learning) for robust and adaptive control. Independence from robot parameters is guaranteed through approximated robot functions, using a neural network with online update laws, and through model-free algorithms. The performance of the concepts is evaluated through simulations and experiments. In complex freeform trajectory-tracking applications, excellent absolute mean position errors (<0.3 rad) are achieved. Position and torque control are combined in a parallel concept with minimized absolute mean torque errors (<0.1 Nm).
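As a point of reference for the conventional baseline the thesis builds on, a PD position-tracking loop on a one-dimensional, unit-mass double integrator can be sketched as follows. The gains and time step are illustrative assumptions; the thesis' actual controllers add sliding-mode and learning terms on top of this kind of law.

```python
def track(setpoints, kp=40.0, kd=12.0, dt=0.01):
    """Minimal PD position tracking of a 1-D double-integrator 'joint'
    (unit mass): u = kp*e - kd*v, the classical baseline that robust
    and adaptive schemes extend. Returns final position and the
    per-step absolute tracking errors."""
    x, v = 0.0, 0.0
    errors = []
    for target in setpoints:
        e = target - x
        u = kp * e - kd * v   # PD law, derivative term on measured velocity
        v += u * dt           # integrate acceleration (unit mass: a = u)
        x += v * dt           # integrate velocity
        errors.append(abs(e))
    return x, errors
```

With these near-critically-damped gains, a unit step setpoint is reached with negligible residual error within a few hundred 10 ms steps; disturbances or an unknown mass would degrade this, which is what motivates the adaptive extensions.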

    Design and calibration of dielectric-film motion sensors for multi-degree-of-freedom soft robots

    Soft robots could enable intrinsically safe human-robot interaction because they are made of deformable materials. Motion sensors suited to soft robots must be compatible with the deformable, multi-degree-of-freedom (DOF) mechanisms found on soft robots. This research project proposes design tools for this new kind of motion-sensing system. To demonstrate these tools, a sensor system is designed for an existing soft robot used for image-guided surgical interventions. In addition, a calibration algorithm using machine learning techniques is proposed for multi-DOF sensors. A prototype of the designed sensor system is fabricated and installed on the existing soft robot. In experimental trials, the sensor-system prototype achieves an average accuracy of 0.3 mm and a worst-case accuracy of 1.2 mm.
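The simplest possible instance of sensor calibration — fitting a linear gain and offset from raw readings to reference positions by least squares — can be sketched as follows. The multi-DOF, machine-learning calibration described above generalizes this idea; the function below is only an illustrative stand-in.

```python
def fit_linear_calibration(raw, ref):
    """Fit gain and offset mapping raw sensor readings to reference
    positions by ordinary least squares (single sensor, single DOF)."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(ref) / n
    # closed-form simple linear regression: slope = Sxy / Sxx
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, ref))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset
```

Once `gain` and `offset` are known, a position estimate is simply `gain * reading + offset`; a multi-DOF sensor with cross-coupling between channels is what pushes the real calibration toward learned, non-linear models.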

    A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention Task

    In this work we tackle the problem of child engagement estimation while children freely interact with a robot in a friendly, room-like environment. We propose a deep-learning-based multi-view solution that takes advantage of recent developments in human pose detection. We extract the child's pose from different RGB-D cameras placed regularly around the room, fuse the results and feed them to a deep neural network trained to classify engagement levels. The network contains a recurrent layer in order to exploit the rich temporal information contained in the pose data. The resulting method outperforms a number of baseline classifiers and provides a promising tool for better automatic understanding of a child's attitude, interest and attention while cooperating with a robot. The goal is to integrate this model into next-generation social robots as an attention-monitoring tool during various Child-Robot Interaction (CRI) tasks, both for typically developed (TD) children and for children with autism spectrum disorder (ASD).

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt the deep deterministic policy gradient (DDPG) algorithm with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
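The shaped reward described above — flocking maintenance plus a collision penalty — might be sketched as follows for point robots; the comfort and collision radii and the penalty weight are illustrative assumptions, not the thesis' values.

```python
import math

def flocking_reward(positions, leader, comfort=1.0, collision=0.3):
    """Sketch of a shaped flocking reward for a leader-follower swarm:
    each follower is penalized for straggling beyond a comfort radius
    around the leader (flocking maintenance), and any pair of robots
    closer than a collision radius incurs a fixed penalty."""
    r = 0.0
    for p in positions:
        d = math.dist(p, leader)
        r -= max(0.0, d - comfort)   # maintenance: penalize straggling
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < collision:
                r -= 10.0            # collision penalty (assumed weight)
    return r
```

A well-spread swarm near the leader scores 0, while any near-collision drops the reward sharply; in a DDPG setup such a scalar would be the per-step reward driving the shared policy.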