
    Dynamic movement primitives based cloud robotic skill learning for point and non-point obstacle avoidance

    Purpose: Dynamic movement primitives (DMPs) are a general method for learning robotic skills from demonstration, but they are usually applied to a single robotic manipulation task. For cloud-based robotic skill learning, the authors consider trajectories/skills altered by the environment, rebuild the DMPs model, and propose a new DMPs-based skill learning framework that removes the influence of the changing environment. Design/methodology/approach: The authors propose methods for two obstacle-avoidance scenes: point obstacles and non-point obstacles. For the point-obstacle case, an accelerating term is added to the original DMPs function. The unknown parameters in this term are estimated by interactive identification and a fitting step of the forcing function, yielding a pure skill free of the influence of obstacles. Using the identified parameters, the skill can be applied to new tasks with obstacles. For the non-point-obstacle case, a space-matching method is proposed by building a matching function from the universal obstacle-free space to the space condensed by obstacles. The original trajectory then changes along with the transformation of the space, producing a general trajectory for the new environment. Findings: The two proposed methods are validated by two experiments, one of which is based on an Omni joystick that records the operator's manipulation motions. Results show that the learned skills allow robots to execute tasks such as autonomous assembly in a new environment. Originality/value: This is a new contribution to DMPs-based cloud robotic skill learning from multi-scene tasks, generalizing new skills to follow changes in the environment.
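    The abstract does not give the exact form of the paper's identified accelerating term. As a minimal sketch of the idea, the snippet below integrates a 2-D discrete DMP (forcing term set to zero for clarity) and adds a simple repulsive acceleration around a point obstacle; the repulsive form, gains, and time constants are assumptions standing in for the paper's estimated parameters:

    ```python
    import numpy as np

    def dmp_rollout(start, goal, obstacle=None, gamma=0.5,
                    alpha=25.0, beta=6.25, tau=1.0, dt=0.001, T=2.0):
        """Euler-integrate a 2-D discrete DMP toward `goal`.
        If `obstacle` is given, add a repulsive acceleration
        gamma * (y - o) / |y - o|^3 -- an assumed stand-in for the
        paper's identified point-obstacle accelerating term."""
        y = np.array(start, float)      # position
        v = np.zeros(2)                 # velocity
        g = np.array(goal, float)
        traj = [y.copy()]
        for _ in range(int(T / dt)):
            acc = alpha * (beta * (g - y) - v)   # spring-damper toward goal
            if obstacle is not None:
                d = y - np.asarray(obstacle, float)
                acc = acc + gamma * d / (np.linalg.norm(d) ** 3 + 1e-9)
            v += dt * acc / tau
            y += dt * v / tau
            traj.append(y.copy())
        return np.array(traj)

    obst = (0.5, 0.1)
    plain = dmp_rollout((0, 0), (1, 0))                  # no coupling term
    avoid = dmp_rollout((0, 0), (1, 0), obstacle=obst)   # with coupling term

    def min_dist(traj):
        return np.linalg.norm(traj - np.array(obst), axis=1).min()
    ```

    With the coupling term, the rollout keeps a larger clearance from the obstacle while still converging close to the goal, which is the qualitative behavior the framework relies on when transferring a learned skill to a new obstacle layout.
    
    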

    Experimental Performance of Mobile Robotic System by Involving IoT Technique

    In this paper, a concept for a mobile robotic system enabled by IoT techniques is described, and an architecture for operating the system through a web application is discussed. The mobile robotic system is controlled through web-based applications, and the air quality of the environment is also measured. A NodeMCU (node microcontroller unit) is used for wireless communication; it interacts with the robot and its microcontroller and uploads the data to the cloud network system. The robot can be remotely controlled using web applications, and data are sent to and stored in the cloud successfully. The demonstrated capabilities show that web-based applications can be used for controlling and monitoring the robotic system, and that the stored data can be accessed from anywhere through mobile Android applications. This provides an affordable solution for accessing and monitoring the pipelines and tunnels of coal mines, oil pipelines, etc., where human access is very difficult. The use of the IoT cloud also facilitates data storage, which leads toward a new generation of robotic systems.
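    The abstract does not specify the upload protocol. As a minimal sketch of how one air-quality reading could be packaged and pushed to a cloud endpoint over HTTP, with the field names, device identifier, and URL all being illustrative assumptions rather than the paper's actual API:

    ```python
    import json
    import time
    import urllib.request

    def build_payload(device_id, air_quality_ppm):
        """Package one air-quality reading as a JSON-ready dict."""
        return {
            "device": device_id,                 # hypothetical device name
            "air_quality_ppm": air_quality_ppm,  # sensor reading
            "timestamp": int(time.time()),       # upload time (Unix seconds)
        }

    def upload(payload, url):
        """POST the reading to a cloud endpoint (URL is deployment-specific)."""
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:  # network call, not run here
            return resp.status
    ```

    On the actual NodeMCU hardware the same pattern would run as Lua or Arduino C++ firmware; the sketch only illustrates the read-package-POST flow that lets the web application later retrieve readings from the cloud store.
    
    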

    Guidance and reactive trajectory planning of a monocular drone controlled by artificial intelligence

    ABSTRACT: Autonomous guidance is a continuously evolving research topic, and the popularization of micro aerial vehicles such as quadcopters has contributed to its expansion. Because of the wide range of environments they may have to navigate, quadcopters face many challenges of their own. In contrast with autonomous cars, quadcopters most often navigate unknown, unmapped environments with limited or no GPS service, so new guidance methods were needed. The literature review reveals two main categories relevant to the autonomous guidance problem: locally reactive guidance for exploration and oriented guidance. The former includes forms of guidance that do not aim for a specific target, while the latter focuses on reaching a destination. Both categories consider guidance in unknown environments and mostly rely on reinforcement learning or imitation learning. However, few studies on autonomous oriented guidance are conducted in full-size, complex environments. The objective of this research project is therefore to create an intelligent agent capable of imitating a human guidance policy in a complex, unknown environment based on a depth-map image and the relative coordinates of the goal. Given its lower development cost and computation time, an imitation learning approach was chosen. A sophisticated simulation environment was set up to create an imitation learning dataset: 624 suboptimal demonstration paths from 9 different 3D environments, for a total of 296,466 learning pairs. The demonstrations are qualified as suboptimal because the expert is a human doing their best to solve the guidance problem without recourse to any optimal path-planning algorithm. A classification model was introduced to predict the appropriate guidance command from the current and past observations. The model learns meaningful representations of its inputs, which are processed by a long short-term memory (LSTM) network followed by a multilayer perceptron (MLP); in this way, the depth image obtained from the original RGB image, together with the relative coordinates of the destination, is converted into a guidance command at each time step. To improve classification accuracy on the validation and test sets, a custom loss function and data-augmentation techniques were incorporated during training, and a grid search over combinations of dataset-augmentation proportions was conducted to select the best model. Accuracies between 77.10% and 82.59% were obtained, revealing a significant dependence on the augmentation techniques.
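    The thesis abstract names the pipeline (depth + relative-goal encoding, then LSTM, then MLP head) without giving its dimensions. Below is a rough NumPy sketch of that inference pipeline; all sizes, the three-command action space, and the random weights are assumptions for illustration, not the thesis's trained model:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, U, b):
        """One LSTM step; gate order in W/U/b: input, forget, cell, output."""
        z = W @ x + U @ h + b
        H = h.size
        i = sigmoid(z[:H])
        f = sigmoid(z[H:2 * H])
        g = np.tanh(z[2 * H:3 * H])
        o = sigmoid(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

    def predict_commands(obs_seq, params):
        """Map a sequence of [depth features + relative goal] vectors to one
        guidance-command index per time step (argmax over the head's logits)."""
        W, U, b, W_out, b_out = params
        H = U.shape[1]
        h, c = np.zeros(H), np.zeros(H)
        commands = []
        for x in obs_seq:
            h, c = lstm_step(x, h, c, W, U, b)
            logits = W_out @ h + b_out   # fully connected head over LSTM state
            commands.append(int(np.argmax(logits)))
        return commands

    rng = np.random.default_rng(0)
    D, H, K = 10, 16, 3                  # input dim, hidden dim, command classes
    params = (rng.normal(0, 0.1, (4 * H, D)),   # input-to-gates weights
              rng.normal(0, 0.1, (4 * H, H)),   # recurrent weights
              np.zeros(4 * H),                  # gate biases
              rng.normal(0, 0.1, (K, H)),       # head weights
              np.zeros(K))                      # head biases
    obs_seq = rng.normal(size=(5, D))    # 5 time steps of fake observations
    cmds = predict_commands(obs_seq, params)
    ```

    The recurrent state is what lets the classifier condition on previous observations as well as the current one, which is the role the thesis assigns to the LSTM.
    
    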