24 research outputs found

    Adaptive sensorimotor peripersonal space representation and motor learning for a humanoid robot

    This thesis presents possible computational mechanisms by which a humanoid robot can develop a coherent representation of the space within its reach (its peripersonal space) and use it to control its movements. Those mechanisms are inspired by current theories of peripersonal space representation and motor control in humans, targeting a cross-fertilization between robotics on one side and cognitive science on the other. This research addresses the issue of adaptivity at the sensorimotor level, at the control level and at the level of simple task learning. First, this work considers the concept of body schema and suggests a computational translation of this concept, appropriate for controlling a humanoid robot. This model of the body schema is adaptive and evolves as a result of the robot's sensory experience. It suggests new avenues for understanding various psychophysical and neuropsychological phenomena of human peripersonal space representation, such as adaptation to distorted vision and tool use, fake limb experiments, body-part-centered receptive fields, and multimodal neurons. Second, it is shown how the motor modality can be added to the body schema. The suggested controller is inspired by the dynamical systems theory of motor control and allows the robot to simultaneously and robustly control its limbs in joint angle space and in end-effector location space. This amounts to controlling the robot in both the proprioceptive and visual modalities. This multimodal control can benefit from the advantages offered by each modality and improves on traditional robotic controllers in several respects. It offers a simple and elegant solution to the singularity and joint limit avoidance problems and can be seen as a generalization of the Damped Least Squares approach to robot control. The controller exhibits several properties of human reaching movements, such as quasi-straight hand paths, bell-shaped velocity profiles and non-equifinality. In a third step, the motor modality is endowed with a statistical learning mechanism, based on Gaussian Mixture Models, that enables the humanoid to learn motor primitives from demonstrations. The robot is thus able to learn simple manipulation tasks and generalize them to various contexts, in a way that is robust to perturbations occurring during task execution. In addition to simulation results, the whole model has been implemented and validated on two humanoid robots, the Hoap3 and the iCub, enabling them to learn their arm and head geometries, perform reaching movements, adapt to unknown tools and visual distortions, and learn simple manipulation tasks in a smooth, robust and adaptive way. Finally, this work hints at possible computational interpretations of the concepts of body schema, motor perception and motor primitives.
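For context, the sketch below shows the standard damped least-squares (DLS) velocity inverse-kinematics step that the abstract cites as the baseline its multimodal controller generalizes, applied to a toy planar 2-link arm. It is only the textbook DLS method under illustrative names and parameters, not the thesis's controller.

```python
import numpy as np

def dls_ik_step(q, target, fk, jacobian, damping=0.05, gain=1.0):
    """One damped least-squares (DLS) velocity IK step.

    The damping term keeps the pseudo-inverse well conditioned near
    singularities, trading a little tracking accuracy for robustness.
    """
    J = jacobian(q)
    err = target - fk(q)
    # dq = J^T (J J^T + lambda^2 I)^{-1} (gain * err)
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return q + J.T @ np.linalg.solve(JJt, gain * err)

# Toy planar 2-link arm with unit link lengths (illustrative only).
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.5])
for _ in range(200):
    q = dls_ik_step(q, np.array([1.2, 0.8]), fk, jacobian)
```

The damping is what regularizes the inverse near singularities, which is the baseline behaviour the abstract describes its controller as generalizing across the proprioceptive and visual modalities.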

    From humans to humanoids: The optimal control framework

    In recent years, research in cognitive control, neuroscience and humanoid robotics has converged on frameworks which aim, on one side, at modeling and analyzing human motion and, on the other, at enhancing the motor abilities of humanoids. In this paper we try to bridge the gap between the two areas, giving an overview of the literature in the two fields concerning the production of movements. First, we survey computational motor control models based on optimality principles; then, we review available implementations and techniques to transfer these principles to humanoid robots, with a focus on the limitations and possible improvements of the current implementations. Moreover, we propose Stochastic Optimal Control as a framework to take into account delays and noise, thus capturing the unpredictability typical of both human and humanoid systems. Optimal Control in general can also easily be integrated with Machine Learning frameworks, resulting in a computational implementation of human motor learning. This survey is mainly addressed to roboticists attempting to implement human-inspired controllers on robots, but may also be of interest to researchers in other fields, such as computational motor control.
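As a concrete anchor for the optimality principles surveyed above, here is a minimal finite-horizon discrete-time LQR solved by the backward Riccati recursion; by certainty equivalence the same gains remain optimal under additive Gaussian noise, which makes LQG a convenient entry point to the stochastic optimal control setting the paper advocates. The point-mass reaching model and cost weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, horizon):
    """Backward Riccati recursion for a discrete-time finite-horizon LQR.

    Returns time-varying gains K[t] such that u_t = -K[t] @ x_t minimizes
    sum_t (x_t' Q x_t + u_t' R u_t) + x_N' Qf x_N  s.t.  x_{t+1} = A x_t + B u_t.
    """
    P = Qf
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]              # reorder from t = 0 to t = horizon - 1

# Toy point-mass reaching model: state = [position, velocity], control = force.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.zeros((2, 2))                # no running state cost
R = np.array([[1e-4]])              # effort cost
Qf = np.diag([1e3, 1e3])            # end near the target with low velocity
gains = finite_horizon_lqr(A, B, Q, R, Qf, horizon=100)

# Simulate the closed loop starting 10 cm away from the target (at the origin).
x = np.array([[0.1], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
```

Under signal-dependent noise or sensory delays the certainty-equivalence shortcut no longer holds exactly, which is precisely where the stochastic optimal control machinery discussed in the paper becomes relevant.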

    Study and implementation of sensor fusion algorithms for spiking and classic sensors with the AER communication protocol, and their application to neuro-inspired robotic systems

    The objective of this thesis is to analyze, design, simulate and implement a model that follows the principles of the human nervous system when a reaching movement is made. The background of the thesis is the field of neuromorphic engineering. This term was first coined in the late eighties by Carver Mead; its main objective is to develop hardware devices, based on the neuron as the basic unit, to perform a range of tasks such as decision making, image processing and learning. During the last twenty years this field of research has gathered a large number of researchers around the world, and spike-based sensors and devices that perform spike-processing tasks have been developed. A neuro-inspired controller model based on the classic VITE and FLETE algorithms is proposed in this thesis (specifically, the VITE model, which generates a non-planned trajectory, and the FLETE model, which generates the forces needed to hold a reached position). The hardware platforms used to implement them are an FPGA and a VLSI multi-chip setup. Then, considering how a reaching movement is performed by humans, these algorithms are translated under the constraints of each hardware device: spike-processing blocks described in VHDL for the FPGA, and LIF neurons for the VLSI chips. To achieve a successful translation of the VITE algorithm under the constraints of the FPGA, a new spike-processing block, the GO block, is designed, simulated and implemented. On the other hand, to perform an accurate translation of the VITE algorithm under the VLSI requirements, recent biological findings are studied, and a model which implements the co-activation of NMDA channels (activity related to that detected in the basal ganglia a short time before a movement is made) is modeled, simulated and implemented. Once the model is defined for both platforms, it is simulated using the Matlab Simulink environment for the FPGA and the Brian simulator for the VLSI chips. The hardware results of the translated algorithms are presented. The open-loop spike-based VITE (on both platforms) and the closed-loop version (FPGA), applied and connected to a robotic platform over the AER bus, show excellent behaviour in terms of power and resource consumption. They also show accurate and precise functioning in reaching and tracking movements when the target is supplied by an AER retina or jAER. Thus, a full neuro-inspired architecture is implemented, from the sensor (retina) to the end effector (robot), through the neuro-inspired controller designed here. An alternative for the SVITE platform is also presented: a random element is added to the neuron model to include variability in the neural response. The results obtained for this variant show behaviour similar to that of the deterministic algorithms, demonstrating the possibility of using this pseudo-random controller in noisy and/or random environments. Finally, this thesis argues that PFM is the most suitable modulation to drive motors in a neuromorphic hardware environment: it allows supplying the events directly to the motors, and it keeps the system unaffected by spurious or noisy events. The novel results achieved with the VLSI multi-chip setup, the first attempt to control a robotic platform using sub-threshold low-power neurons, are intended to set the basis for designing neuro-inspired controllers.
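For reference, the underlying rate-based VITE dynamics (a difference vector integrated toward the target and gated by a GO signal) can be sketched in a few lines. The spike-based FPGA and VLSI translations described in the thesis are not reproduced here, and the GO profile and gains below are illustrative assumptions.

```python
import numpy as np

def vite_trajectory(target, p0, alpha=25.0, dt=1e-3, t_end=1.0):
    """Rate-based VITE (Vector Integration To Endpoint) dynamics.

    dV/dt = alpha * (-V + T - P)     difference-vector population
    dP/dt = G(t) * max(V, 0)         outflow position command, GO-gated

    One agonist channel per dimension is shown; in the full model an
    antagonist channel handles movement in the opposite direction.
    """
    steps = int(t_end / dt)
    P = np.array(p0, dtype=float)
    V = np.zeros_like(P)
    traj = np.empty((steps, P.size))
    for k in range(steps):
        t = k * dt
        G = 40.0 * t / (0.2 + t)     # illustrative saturating GO signal
        V += dt * alpha * (-V + target - P)
        P += dt * G * np.maximum(V, 0.0)
        traj[k] = P
    return traj

# Reach from the origin toward an agonist-direction target.
traj = vite_trajectory(target=np.array([1.0, 0.5]), p0=[0.0, 0.0])
```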

    Motion planning and reactive control on learnt skill manifolds

    We propose a novel framework for motion planning and control that is based on a manifold encoding of the desired solution set. We present an alternative, model-free approach to path planning, replanning and control. Our approach is founded on the idea of encoding the set of possible trajectories as a skill manifold, which can be learnt from data, such as demonstrations. We describe the manifold representation of skills, a technique for learning it from data and a method for generating trajectories as geodesics on such manifolds. We extend the trajectory generation method to handle dynamic obstacles and constraints. We show how a state metric naturally arises from the manifold encoding and how this can be used for reactive control in an online manner. Our framework tightly integrates learning, planning and control in a computationally efficient representation, suitable for realistic humanoid robotic tasks that are defined by skill specifications involving high-dimensional nonlinear dynamics, kinodynamic constraints and non-trivial cost functions, in an optimal control setting. Although, in principle, such problems can be handled by well-understood analytical methods, it is often difficult and expensive to formulate models that enable the analytical approach. We test our framework on various types of robotic systems, ranging from a 3-link arm to a small humanoid robot, and show that the manifold encoding gives significant improvements in performance without loss of accuracy. Furthermore, we evaluate the framework against a state-of-the-art imitation learning method. We show that our approach, by learning manifolds of robotic skills, allows for efficient planning and replanning in changing environments, and for robust, online reactive control.
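The core idea of generating trajectories as geodesics on a learnt manifold can be illustrated with a toy sketch: encode the "skill manifold" as a position-dependent metric that is cheap near demonstration data, then obtain a trajectory by minimizing the discrete geodesic energy between fixed start and goal points. The metric, demonstration data and optimizer below are illustrative assumptions, not the manifold-learning method used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "skill manifold": a conformal metric that is cheap where
# demonstration density is high and expensive elsewhere (illustrative only).
demos = np.random.default_rng(0).normal([0.5, 0.5], 0.1, size=(50, 2))

def metric(x, bandwidth=0.2, penalty=50.0):
    """Scalar metric weight: large where demonstration density is low."""
    d2 = np.sum((demos - x) ** 2, axis=1)
    density = np.mean(np.exp(-d2 / (2 * bandwidth ** 2)))
    return 1.0 + penalty / (1e-3 + density)

def geodesic_energy(flat_pts, start, goal, n_pts):
    pts = np.vstack([start, flat_pts.reshape(n_pts, 2), goal])
    segs = np.diff(pts, axis=0)
    mids = 0.5 * (pts[:-1] + pts[1:])
    # Discrete energy: sum of metric-weighted squared segment lengths.
    return sum(metric(m) * (s @ s) for m, s in zip(mids, segs))

start, goal, n_pts = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 15
init = np.linspace(start, goal, n_pts + 2)[1:-1].ravel()
res = minimize(geodesic_energy, init, args=(start, goal, n_pts),
               method="L-BFGS-B")
path = np.vstack([start, res.x.reshape(n_pts, 2), goal])
```

Minimizing this energy bends the straight-line initialization toward the demonstration data, which is the intuition behind treating trajectory generation as finding geodesics under a data-derived metric.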

    Deep Incremental Learning for Object Recognition

    In recent years, deep learning techniques have received great attention in the field of information technology. These techniques proved to be particularly useful and effective in domains like natural language processing, speech recognition and computer vision, and in several real-world applications deep learning approaches improved the state of the art. In the field of machine learning, deep learning was a real revolution, and a number of effective techniques have been proposed for supervised, unsupervised and representation learning. This thesis focuses on deep learning for object recognition and, in particular, addresses incremental learning techniques. By incremental learning we denote approaches able to create an initial model from a small training set and to improve the model as new data become available. Using temporally coherent sequences proved to be useful for incremental learning, since temporal coherence also allows operating in an unsupervised manner. A critical issue in incremental learning is forgetting, i.e., the risk of losing previously learned patterns as new data are presented. In the first chapters of this work we introduce the basic theory of neural networks, Convolutional Neural Networks (CNNs) and incremental learning. CNNs are today among the most effective approaches to supervised object recognition; they are well accepted by the scientific community and largely used by ICT big players like Google and Facebook: relevant applications are Facebook face recognition and Google image search. The scientific community has several large datasets (e.g., ImageNet) for the development and evaluation of object recognition approaches; however, very few temporally coherent datasets are available to study incremental approaches. For this reason we decided to collect a new dataset named TCD4R (Temporal Coherent Dataset For Robotics).
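To make the forgetting problem concrete, the sketch below shows one common mitigation strategy, rehearsal with a small replay buffer, applied to a toy CNN in PyTorch. This is an illustrative baseline under assumed names and hyperparameters, not necessarily the incremental-learning approach developed in the thesis.

```python
import random
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny CNN standing in for a real object-recognition backbone."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
buffer, buffer_size = [], 200        # small rehearsal memory of past samples

def incremental_update(batch_x, batch_y):
    """Train on the incoming batch mixed with replayed past samples."""
    # Keep a bounded memory of what was seen (a real system would use e.g.
    # reservoir sampling to keep the buffer representative over time).
    for x, y in zip(batch_x, batch_y):
        if len(buffer) < buffer_size:
            buffer.append((x.clone(), y.clone()))
    replay = random.sample(buffer, min(len(buffer), len(batch_x)))
    rx, ry = zip(*replay)
    inputs = torch.cat([batch_x, torch.stack(rx)])
    targets = torch.cat([batch_y, torch.stack(ry)])
    opt.zero_grad()
    loss_fn(model(inputs), targets).backward()
    opt.step()

# Example: one update on a random batch standing in for a video-frame batch.
incremental_update(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

Mixing replayed samples into every update keeps gradients from being dominated by the most recent frames, which is one simple way of limiting the forgetting the abstract describes.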

    User modelling for robotic companions using stochastic context-free grammars

    Creating models of others is a sophisticated human ability that robotic companions need to develop in order to have successful interactions. This thesis proposes user modelling frameworks to personalise the interaction between a robot and its user and devises novel scenarios where robotic companions may apply these user modelling techniques. We tackle the creation of user models in a hierarchical manner, using a streamlined version of the Hierarchical Attentive Multiple-Models for Execution and Recognition (HAMMER) architecture to detect low-level user actions and taking advantage of Stochastic Context-Free Grammars (SCFGs) to instantiate higher-level models which recognise uncertain and recursive sequences of low-level actions. We discuss two distinct scenarios for robotic companions: a humanoid sidekick for power-wheelchair users and a companion for hospital patients. Next, we address the limitations of the previous scenarios by applying our user modelling techniques and designing two further scenarios that fully take advantage of the user model: a wheelchair driving tutor which models the user's abilities, and a musical collaborator which learns the preferences of its users. The methodology produced interesting results in all scenarios: users preferred the actual robot over a simulator as a wheelchair sidekick. Hospital patients rated their interactions with the companion positively, independently of their age. Moreover, most users agreed that the music collaborator had become a better accompanist with our framework. Finally, we observed that users' driving performance improved when the robotic tutor instructed them to repeat a task. As our workforce ages and the care requirements of our society grow, robots will need to play a role in helping us lead better lives. This thesis shows that, through the use of SCFGs, adaptive user models may be generated which can then be used by robots to assist their users.
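A minimal illustration of the SCFG layer: the toy grammar below, written for NLTK's PCFG and Viterbi parser, recognises recursive sequences of low-level actions and returns the probability of the best parse, which is the kind of quantity a higher-level user model can use to rank hypotheses. The action names, rules and probabilities are hypothetical, not taken from the thesis.

```python
from nltk import PCFG
from nltk.parse import ViterbiParser

# Toy stochastic context-free grammar over hypothetical low-level actions
# (e.g. as detected by an action recognizer such as HAMMER's low level).
grammar = PCFG.fromstring("""
    TASK -> LEG TASK    [0.4] | LEG [0.6]
    LEG  -> 'forward'   [0.5] | 'turn_left' [0.25] | 'turn_right' [0.25]
""")

parser = ViterbiParser(grammar)

# A recursive sequence of observed low-level actions.
observed = ['forward', 'turn_left', 'forward', 'forward']

# The most probable parse gives both a recognition decision and a
# probability that can be compared across competing user models.
for tree in parser.parse(observed):
    print(tree.prob())
    tree.pretty_print()
```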

    Adaptive Robot Framework: Providing Versatility and Autonomy to Manufacturing Robots Through FSM, Skills and Agents

    The main conclusions that can be extracted from an analysis of the current situation and future trends of industry, in particular manufacturing plants, are the following: there is a growing need to provide customization of products, a high variation of production volumes, and a downward trend in the availability of skilled operators due to the ageing of the population. Adapting to this new scenario is a challenge for companies, especially small and medium-sized enterprises (SMEs), which are suffering first-hand how their specialization is turning against them. The objective of this work is to provide a tool that can serve as a basis to face these challenges effectively. The presented framework, thanks to its modular architecture, allows focusing on the different needs of each particular company and offers the possibility of scaling the system to future requirements. The platform is divided into three layers: the interface with robot systems, the execution engine, and the application development layer. Taking advantage of the ecosystem provided by this framework, different modules have been developed to face the mentioned challenges of the industry. On the one hand, to address the need for product customization, the integration of tools that increase the versatility of the cell is proposed. An example of such a tool is skill-based programming: by applying this technique, a process can be intuitively adapted to the variations or customizations that each product requires, and the use of skills favours the reuse and generalization of developed robot programs. Regarding the variation of production volumes, a system which permits greater mobility and faster reconfiguration is necessary: if a line has a production peak, mechanisms for balancing the load at a reasonable cost are required. In this respect, the architecture allows an easy integration of different robotic systems, actuators, sensors, etc., and, thanks to the developed calibration and set-up techniques, the system can be adapted to new workspaces at an effective time/cost. With respect to the third topic, an agent-based monitoring system is proposed. This module opens up a multitude of possibilities for the integration of auxiliary modules for protection and safety in collaboration and interaction between people and robots, something that will be necessary in the not so distant future. To demonstrate the advantages and improved adaptability of the developed framework, a series of real use cases is presented. In each of them, a different problem has been solved using the developed skills, demonstrating how easily they adapt to different situations.
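As an illustration of how skill-based programming and an FSM-based execution engine can fit together, the sketch below registers parameterized skills in a registry and sequences them with a tiny finite state machine. All names and the API shape are hypothetical, not the framework's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict

SKILLS: Dict[str, Callable[..., bool]] = {}

def skill(name: str):
    """Register a function as a reusable, parameterizable skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("move_to")
def move_to(pose) -> bool:
    print(f"moving to {pose}")
    return True                      # True = step succeeded

@skill("pick")
def pick(grasp_width: float) -> bool:
    print(f"closing gripper to {grasp_width} m")
    return True

@dataclass
class State:
    skill_name: str
    params: dict
    on_success: str = "done"
    on_failure: str = "error"

@dataclass
class SkillFSM:
    states: Dict[str, State]
    current: str = "start"

    def run(self):
        # Execute skills and follow success/failure transitions until a
        # terminal state is reached.
        while self.current not in ("done", "error"):
            state = self.states[self.current]
            ok = SKILLS[state.skill_name](**state.params)
            self.current = state.on_success if ok else state.on_failure
        return self.current

# A product-specific task is just a new FSM over the same generic skills.
fsm = SkillFSM(states={
    "start": State("move_to", {"pose": [0.4, 0.1, 0.3]}, on_success="grasp"),
    "grasp": State("pick", {"grasp_width": 0.02}),
})
print(fsm.run())
```

Because tasks are composed from generic, parameterized skills, adapting the cell to a new product variant amounts to writing a new FSM rather than reprogramming the robot from scratch, which is the reuse argument made in the abstract.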

    When a child meets a robot: the psychological factors that make interaction possible

    This PhD thesis, titled When a child meets a robot: the psychological factors that make interaction possible, aims to introduce and explore a fairly new field of psychological research that deals specifically with child-robot interaction, presenting the many facets that characterize it and the main psychological constructs involved, such as Theory of Mind (ToM), trust and the attachment relationship, without neglecting the value of the design of the robots used in the studies that made this thesis possible. The research works presented here explore the constructs mentioned above, investigating them in depth and with scientific curiosity. In detail, the following studies are presented: 1) Shall I trust you? From Child-Robot Interaction To Trusting Relationship; 2) Can a robot lie? The role of false belief and intentionality understanding in children aged 5 and 6 years; 3) A robot is not worth another: exploring Children's Mental State Attribution to Different Humanoid Robots; 4) Coding with me: exploring the effect of a coding intervention on preschoolers' cognitive skills. Finally, in the last chapter, concerning the conclusions, a theoretical reflection is developed regarding the relevant role played by children's cognitive development in interactional processes with robotic agents.