18 research outputs found
Control of the mobile platform of a mobile gait rehabilitation system
National Technical University of Athens -- Master's Thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) “Automation Systems
Robot Learning from Human Demonstrations for Human-Robot Synergy
Human-robot synergy enables new developments in industrial and assistive robotics research. In recent years, collaborative robots have become able to work together with humans to perform a task while sharing the same workspace. However, the teachability of robots is a crucial factor in establishing robots as human teammates. Robots require certain abilities, such as easily learning diversified tasks and adapting to unpredicted events. The most feasible method that currently allows a human teammate to teach a robot how to perform a task is Robot Learning from Demonstrations (RLfD). The goal of this method is to allow non-expert users to 'program' a robot by simply guiding it through a task. The focus of this thesis is the development of a novel framework for Robot Learning from Demonstrations that enhances the robot's ability to learn and perform the sequences of actions of object manipulation tasks (high-level learning) and, simultaneously, to learn and adapt the trajectories necessary for object manipulation (low-level learning). A method that automatically segments demonstrated tasks into sequences of actions is developed in this thesis. Subsequently, the generated sequences of actions are employed by a Reinforcement Learning (RL) from human demonstration approach to enable high-level robot learning. The low-level robot learning consists of a novel method that selects similar demonstrations (in the case of multiple demonstrations of a task) and the Gaussian Mixture Model (GMM) method. The developed robot learning framework allows learning from single and multiple demonstrations. As soon as the robot has knowledge of a demonstrated task, it can perform the task in cooperation with the human. However, the need to adapt the learned knowledge may arise during human-robot synergy.
Firstly, Interactive Reinforcement Learning (IRL) is employed as a decision support method to predict the sequence of actions in real time, to keep the human in the loop, and to enable learning the user's preferences. Subsequently, a novel method that modifies the learned Gaussian Mixture Model (m-GMM) is developed in this thesis. This method allows the robot to cope with changes in the environment, such as objects placed in a pose different from the demonstrated one, or obstacles introduced by the human teammate. The modified Gaussian Mixture Model is further used by Gaussian Mixture Regression (GMR) to generate a trajectory that can efficiently control the robot. The developed framework for Robot Learning from Demonstrations was evaluated on two different robotic platforms: a dual-arm industrial robot and an assistive robotic manipulator. For both robotic platforms, small studies were performed on industrial and assistive manipulation tasks, respectively. Several Human-Robot Interaction (HRI) methods, such as kinesthetic teaching, a gamepad, or 'hands-free' control via head gestures, were used to provide the robot demonstrations. The 'hands-free' HRI enables individuals with severe motor impairments to provide a demonstration of an assistive task. The experimental results demonstrate the potential of the developed robot learning framework to enable continuous human-robot synergy in industrial and assistive applications.
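The GMR step described above conditions a joint time-position mixture on the current time to produce a trajectory point. A minimal 1-D sketch is given below; the function name, the flat component tuples, and the hand-set parameters are illustrative assumptions, not the thesis's implementation:

```python
import math

def gmr(t, components):
    """Gaussian Mixture Regression: condition a GMM over (t, x) on the
    input time t and return the expected trajectory point E[x | t].
    Each component is (weight, mu_t, mu_x, s_tt, s_tx, s_xx)."""
    hs, xs = [], []
    for w, mu_t, mu_x, s_tt, s_tx, s_xx in components:
        # Responsibility of this component for the input t.
        h = w * math.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / math.sqrt(2 * math.pi * s_tt)
        # Conditional mean of x given t under this Gaussian.
        x = mu_x + (s_tx / s_tt) * (t - mu_t)
        hs.append(h)
        xs.append(x)
    total = sum(hs)
    # Responsibility-weighted average of the conditional means.
    return sum(h * x for h, x in zip(hs, xs)) / total
```

Querying `gmr` at successive time steps yields a smooth trajectory; modifying the component means (as in the m-GMM method) shifts the regressed trajectory accordingly.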
FUZZY CONTROLLER FOR THE CONTROL OF THE MOBILE PLATFORM OF THE CORBYS ROBOTIC GAIT REHABILITATION SYSTEM
In this paper, an inverse-kinematics-based control algorithm for the joystick control of the mobile platform of the novel mobile robot-assisted gait rehabilitation system CORBYS is presented. The mobile platform has four independently steered and driven wheels. Given the linear and angular velocities of the mobile platform, the inverse kinematics algorithm outputs the steering angle and the driving angular velocity of each of the four wheels. The paper focuses on the steering control of the platform, for which a fuzzy logic controller is developed and implemented. Experimental results from real-world steering of the platform are presented in the paper.
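The inverse kinematics step can be sketched as follows, assuming a rigid-body twist (vx, vy, ω) and wheel contact points given in the platform frame; the function name and the simple kinematic model are illustrative assumptions, not the CORBYS implementation:

```python
import math

def wheel_commands(vx, vy, omega, wheel_positions, wheel_radius):
    """For each wheel at (x, y) in the platform frame, compute the
    steering angle [rad] and driving angular velocity [rad/s] that
    realize the commanded platform twist (vx, vy, omega)."""
    commands = []
    for (x, y) in wheel_positions:
        # Velocity of the wheel contact point: platform translation
        # plus the rotational component (omega cross r).
        wvx = vx - omega * y
        wvy = vy + omega * x
        steer = math.atan2(wvy, wvx)
        drive = math.hypot(wvx, wvy) / wheel_radius
        commands.append((steer, drive))
    return commands
```

For pure forward translation every wheel steers straight ahead and turns at v / r; for pure rotation each wheel aligns tangentially to the circle around the platform center.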
ROBOT LEARNING OF OBJECT MANIPULATION TASK ACTIONS FROM HUMAN DEMONSTRATIONS
Robot learning from demonstration is a method which enables robots to learn in a similar way to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is a high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. The multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), a model of the demonstrated trajectories is learned. The learned trajectory is generated by Gaussian Mixture Regression (GMR) from the learned Gaussian Mixture Model. In the online working phase, the sequence of actions is identified, and experimental results show that the robot performs the learned task successfully.
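The DTW comparison used to pick the most similar demonstrations can be sketched in a few lines. This illustrative version scores 1-D trajectories with the classic dynamic-programming recurrence; a real implementation would compare multi-dimensional end-effector poses:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences: the
    minimum summed point-wise cost over all monotone alignments,
    so sequences of different lengths can still be compared."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignment steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Demonstrations whose pairwise DTW distance is smallest would then be retained for GMM fitting, discarding outlier executions.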
Patient–Robot Co-Navigation of Crowded Hospital Environments
Intelligent multi-purpose robotic assistants have the potential to assist nurses with a variety of non-critical tasks, such as fetching objects, disinfecting areas, or supporting patient care. This paper focuses on enabling a multi-purpose robot to guide patients while walking. The proposed robotic framework aims at enabling a robot to learn how to navigate a crowded hospital environment while maintaining contact with the patient. Two deep reinforcement learning models are developed; the first model considers only dynamic obstacles (e.g., humans), while the second model considers both static and dynamic obstacles in the environment. The models output the robot's velocity based on the following inputs: the patient's gait velocity, which is computed with a leg detection method, and spatial and temporal information about the environment, the humans in the scene, and the robot. The proposed models demonstrate promising results. Finally, the model that considers both static and dynamic obstacles is successfully deployed in the Gazebo simulation environment.
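The gait-velocity input mentioned above can be illustrated with a minimal estimator. The leg-midpoint trajectory and the speed-matching rule below are illustrative assumptions, not the paper's detection method:

```python
import math

def gait_speed(midpoints, timestamps):
    """Average patient speed from successive leg-midpoint detections:
    total path length divided by elapsed time."""
    dist = 0.0
    for (x0, y0), (x1, y1) in zip(midpoints, midpoints[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    return dist / (timestamps[-1] - timestamps[0])

def matched_robot_speed(patient_speed, v_max=1.0):
    # Never command the robot faster than the patient or its own limit,
    # so physical contact with the guided patient is maintained.
    return min(patient_speed, v_max)
```

A learned policy would replace the fixed `min` rule, but the constraint it encodes (robot speed bounded by patient speed) is the same one the co-navigation task requires.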
Study of the SDF1 growth factor's role in carcinogenesis
102 p. Cancer remains a leading cause of death worldwide. Recent progress in the field of bioinformatics has led to an increase in the discovery of genes and proteins associated with cancer development. This thesis deals with the growth factor SDF-1, which plays an important role in embryonic development, homeostasis, and inflammatory diseases, but is also a critical factor in tumor growth and cancer cell metastasis. Breast, prostate, lung, and ovarian cancer, myeloid malignancies, gastrointestinal tumors, and other cancers all express the CXCR4 receptor associated with the SDF-1 growth factor. Overexpression of this receptor leads to aggressive tumors and metastases, in most cases with a low survival rate. In recent years there have been studies in which the CXCR4 receptor was silenced in various types of cancer. The results are encouraging: the rate of tumor growth decreased and, most importantly, the development of metastases was blocked. Μαρία Α. Κυραρίν
Application of Reinforcement Learning to a Robotic Drinking Assistant
Meal assistant robots form a very important part of the assistive robotics sector, since self-feeding is a priority activity of daily living (ADL) for people with physical disabilities such as tetraplegia. A quick survey of current trends in this domain reveals that, while tremendous progress has been made in the development of assistive robots for the feeding of solid foods, the task of feeding liquids from a cup remains largely underdeveloped. Therefore, this paper describes an assistive robot that focuses specifically on the feeding of liquids from a cup, using tactile feedback through force sensors with direct human–robot interaction (HRI). The main focus of the paper is the application of reinforcement learning (RL) to learn the best robotic actions based on the force applied by the user. A model of the application environment is developed based on the Markov decision process, and a software training procedure is designed for quick development and testing. Five commonly used RL algorithms are investigated with the intention of finding the best fit for training, and the system is tested in an experimental study. The preliminary results show a high degree of acceptance by the participants. Feedback from the users indicates that the assistive robot functions intuitively and effectively.
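The Markov-decision-process formulation described above can be illustrated with tabular Q-learning on a toy force-feedback environment. The states, actions, and reward below are illustrative stand-ins, not the paper's model:

```python
import random

def q_learning(states, actions, step, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning: learn action values for a small MDP whose
    dynamics are given by step(state, action) -> (next_state, reward, done)."""
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s, done = states[0], False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, act)] for act in actions)
            # Standard Q-learning temporal-difference update.
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def cup_step(state, action):
    # Toy environment: when the user presses the cup ("contact"),
    # tilting it is rewarded; retracting ends the episode unrewarded.
    return state, (1.0 if action == "tilt" else 0.0), True
```

After training, the greedy policy at the "contact" state prefers "tilt", mirroring how the assistant maps the user's applied force to a drinking action.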
Image-Label Recovery on Fashion Data Using Image Similarity from Triple Siamese Network
Weakly labeled data are inevitable in various research areas of artificial intelligence (AI) where one has only a modicum of knowledge about the complete dataset. One reason for weakly labeled data in AI is insufficient accurately labeled data; strict privacy controls or accidental loss may also cause missing-data problems. However, supervised machine learning (ML) requires accurately labeled data in order to solve a problem successfully. Data labeling is difficult and time-consuming, as it requires manual work, perfect results, and sometimes the involvement of human experts (e.g., for medical labeled data). In contrast, unlabeled data are inexpensive and easily available. Because there is not enough labeled training data, researchers sometimes obtain only one or a few data points per category or label. Training a supervised ML model from such a small set of labeled data is a challenging task. The objective of this research is to recover missing labels from the dataset using a state-of-the-art semi-supervised ML approach. In this work, a novel convolutional neural network-based framework is trained with a few instances of a class to perform metric learning. The dataset is then converted into a graph signal, which is recovered using a recovery algorithm (RA) in the graph Fourier transform domain. The proposed approach was evaluated on a fashion dataset for accuracy and precision, and performed significantly better than graph neural networks and other state-of-the-art methods.
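The metric-learning objective behind a triple Siamese network can be illustrated with the triplet loss. This pure-Python sketch operates on fixed embedding vectors for clarity; in the actual framework a shared network produces the embeddings and the loss is backpropagated through it:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embedding vectors: the anchor should sit at least
    `margin` closer (in squared distance) to the positive example
    (same label) than to the negative example (different label)."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    # Hinge: zero loss once the margin is satisfied.
    return max(d_pos - d_neg + margin, 0.0)
```

Once such an embedding is learned, pairwise similarities between labeled and unlabeled images supply the edge weights of the graph on which the label signal is recovered.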