21 research outputs found

    Bridging the Gap between Probabilistic and Deterministic Models: A Simulation Study on a Variational Bayes Predictive Coding Recurrent Neural Network Model

    The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuating temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting affects the development of different types of information processing while learning fluctuating temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos that imitates the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics that imitate the probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action. Comment: This paper was accepted at the 24th International Conference on Neural Information Processing (ICONIP 2017). The previous arXiv submission is replaced by this version because there was an error in an equation.
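
    As a reading aid, the objective described above can be sketched as a weighted variational lower bound; the notation below (w as the weight on the regularization term, q as the approximate posterior) is an illustrative assumption rather than the paper's exact formulation.

        \mathcal{L}(\theta, \phi) =
            \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
            \;-\; w \,
            \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x)\,\|\,p_\theta(z)\right)}_{\text{regularization}}

    Under this reading, a small w corresponds to strong weighting of the reconstruction term (deterministic, chaos-like dynamics), while a large w corresponds to strong weighting of the regularization term (stochastic dynamics).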

    A Neurorobotics Simulation of Autistic Behavior Induced by Unusual Sensory Precision

    Recently, applying computational models developed in cognitive science to psychiatric disorders has been recognized as an essential approach for understanding the cognitive mechanisms underlying psychiatric symptoms. Autism spectrum disorder is a neurodevelopmental disorder hypothesized to affect information processes in the brain involving the estimation of sensory precision (uncertainty), but the mechanism by which observed symptoms are generated from such abnormalities has not been thoroughly investigated. Using a humanoid robot controlled by a neural network that minimizes precision-weighted prediction error, we suggest that both increased and decreased sensory precision could induce the behavioral rigidity, characterized by resistance to change, that is typical of autistic behavior. Specifically, decreased sensory precision caused error signals to be disregarded, leading to invariability of the robot’s intention, while increased sensory precision caused an excessive response to error signals, leading to fluctuations and subsequent fixation of intention. The results may provide a system-level explanation of mechanisms underlying different types of behavioral rigidity in autism spectrum and other psychiatric disorders. In addition, our findings suggest that symptoms caused by decreased and increased sensory precision could be distinguished by examining patients’ internal experience and the neural activity coding prediction error signals in the biological brain.
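
    The precision-weighted mechanism described above can be illustrated with a minimal sketch; the function names, the simple additive update, and the learning rate are assumptions for illustration, not the network actually used in the study.

        # Minimal sketch of a precision-weighted prediction-error update of an internal
        # "intention" state; names and update rule are illustrative assumptions only.
        import numpy as np

        def update_intention(intention, observation, prediction, sensory_precision, lr=0.1):
            """One update step driven by precision-weighted prediction error."""
            error = observation - prediction              # raw sensory prediction error
            weighted_error = sensory_precision * error    # precision scales the error's influence
            return intention + lr * weighted_error

        # Very low precision makes the error term negligible, so the intention barely
        # changes (rigidity by disregarding errors); very high precision amplifies every
        # error, producing large fluctuations in the intention state.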

    Robot Learning from Human Demonstration: Interpretation, Adaptation, and Interaction

    Robot Learning from Demonstration (LfD) is a research area that focuses on how robots can learn new skills by observing how people perform various activities. As humans, we have a remarkable ability to imitate other humans’ behaviors and adapt to new situations. Endowing robots with these critical capabilities is a significant but very challenging problem, considering the complexity and variation of human activities in highly dynamic environments. This research focuses on how robots can learn new skills by interpreting human activities, adapting the learned skills to new situations, and naturally interacting with humans. This dissertation begins with a discussion of the challenges in each of these three problems. A new unified representation approach is introduced to enable robots to simultaneously interpret the high-level semantic meanings and generalize the low-level trajectories of a broad range of human activities. An adaptive framework based on feature space decomposition is then presented for robots to not only reproduce skills, but also autonomously and efficiently adjust the learned skills to new environments that are significantly different from the demonstrations. To achieve natural Human-Robot Interaction (HRI), this dissertation presents a Recurrent Neural Network-based deep perceptual control approach, which is capable of integrating multi-modal perception sequences with actions for robots to interact with humans in long-term tasks. Overall, by combining the above approaches, an autonomous system is created for robots to acquire important skills that can be applied to human-centered applications. Finally, this dissertation concludes with a discussion of future directions that could accelerate the upcoming technological revolution of robot learning from human demonstration.
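
    As a rough illustration of the kind of recurrent perceptual-control policy described above, the sketch below maps a multi-modal perception sequence to per-step actions; the architecture, modality names, and dimensions are assumptions, not the dissertation's actual model.

        # Illustrative sketch (assumed architecture): an LSTM consumes concatenated
        # multi-modal perception features per time step and emits an action command
        # at every step of a long-horizon task.
        import torch
        import torch.nn as nn

        class PerceptualControlRNN(nn.Module):
            def __init__(self, vision_dim=128, force_dim=6, action_dim=7, hidden_dim=256):
                super().__init__()
                self.rnn = nn.LSTM(vision_dim + force_dim, hidden_dim, batch_first=True)
                self.action_head = nn.Linear(hidden_dim, action_dim)

            def forward(self, vision_seq, force_seq):
                # vision_seq: (batch, T, vision_dim), force_seq: (batch, T, force_dim)
                x = torch.cat([vision_seq, force_seq], dim=-1)
                h, _ = self.rnn(x)
                return self.action_head(h)   # (batch, T, action_dim)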

    Learning What To Say And What To Do: A Model For Grounding Language And Actions

    Automation is becoming increasingly important in today's society, with robots performing many repetitive tasks in industry and even entering our households in the form of vacuum cleaners and lawn mowers. When considering everyday tasks outside the controlled environments of industry, robots tend to perform poorly. In particular, in situations where robots have to interact with humans, a problem arises: how can a robot understand what the human means? While a lot of work has been done in the past on visual perception and the classification of objects, understanding what action a verb translates into has remained largely unexplored. Solving this challenge would enable robots to execute commands given in natural language, and also to verbalise what actions they are performing when prompted. This work studies how a robot can learn the meaning behind the sentences humans use, how that meaning translates into its perception and the real world, but also how to translate its actions into sentences humans understand. To achieve this we propose a novel bidirectional machine learning model, along with a data collection module that can be used by non-technical users. The main idea behind this model is the ability to generalise to novel concepts, composing new sentences and actions from what it learned previously. Humans show this ability to generalise from a young age, and it is a desirable feature for this model. By using humans’ natural teaching instincts to teach the robot, together with this generalisation ability, we hope to obtain a model that allows people everywhere to teach the robot to perform the actions we desire. We validate the model in a number of tasks, using iCub and Pepper robots physically interacting with objects in order to complete a natural language command. We test different actions, including motor actions and emotional displays, while using both transitive and intransitive verbs in the natural language commands. The main contribution of this thesis is the development of a Bidirectional Learning Algorithm applied to a Multiple Timescale Recurrent Neural Network, enabling these models to link action and language in a bidirectional way. A second contribution is the extension of Multiple Timescale architectures to Long Short-Term Memory models, increasing the capabilities of these models. Finally, the third contribution is in the form of data collection modules, with the development of an easy-to-use module based on physical interaction and speech to provide the iCub and Pepper robots with the data to be learned.
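
    The Multiple Timescale Recurrent Neural Network mentioned above is commonly defined by a leaky-integrator update in which each unit has its own time constant; the sketch below shows that generic update only, not the thesis's bidirectional training procedure.

        # Generic multiple-timescale (leaky-integrator) recurrent update, the core of an
        # MTRNN layer: small tau -> fast units tracking detail, large tau -> slow units
        # integrating over longer horizons. A textbook-style sketch, not the thesis's code.
        import numpy as np

        def mtrnn_step(u, x, W_rec, W_in, tau):
            """u: internal states, x: current input, tau: per-unit time constants (>= 1)."""
            activity = np.tanh(u)
            du = -u + W_rec @ activity + W_in @ x
            return u + du / tau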

    Coordination dynamics in the sensorimotor loop

    The last two decades have witnessed radical changes of perspective about the nature of intelligence and cognition, leaving behind some of the assumptions of computational functionalism. From the myriad of approaches seeking to replace the old rule-based, symbolic conception of mind, we are especially interested in two. The first is Embodied and Situated Cognition, where advances in modeling complex adaptive systems through computer simulations have reconfigured the way in which mechanistic, embodied and interactive explanations can conceptualize the mind. We are particularly interested in the concept of the sensorimotor loop, which brings a new perspective on what is needed for a meaningful interaction with the environment, emphasizing the role of the coordination of effector and sensor activities while performing a concrete task. The second is the framework of Coordination Dynamics, which has been developed as a result of the increasing focus of neuroscience on self-organized oscillatory brain dynamics. It provides formal tools to study the mechanisms through which complex biological systems stabilize coordination states under conditions in which they would otherwise become unstable. We will merge both approaches and define coordination in the sensorimotor loop as the main phenomenon behind the emergence of cognitive behavior. At the same time, we will provide methodological tools and concepts to address this hypothesis. Finally, we will present two case studies based on the proposed approach: 1. We will study the phenomenon known as “intermittent behavior”, which is observed in organisms at different levels (from microorganisms to higher animals). We will propose a model that understands intermittent behavior as a general strategy of biological organization when an organism has to adapt to complex changing environments, one that would allow effective sensorimotor loops to be established even in situations of unstable engagement with the world. 2. We will simulate a phonotaxis task performed by an agent with an oscillator network as its neural controller. The objective will be to characterize robust adaptive coupling between perceptive activity and the environmental dynamics purely through phase information processing. We will observe how the robustness of the coupling crucially depends on how the sensorimotor loop structures and constrains both the emergent neural and behavioral patterns. We will hypothesize that this structuring of the sensorimotor space, in which only meaningful behavioral patterns can be stabilized, is a key ingredient for the emergence of higher cognitive abilities.
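
    For the second case study, the phase-coupling idea can be sketched with a Kuramoto-style oscillator network entrained by a periodic sensory signal; the coupling form, gains, and names below are illustrative assumptions, not the controller used in the thesis.

        # Kuramoto-style sketch of phase-based sensorimotor coupling (illustrative only):
        # each oscillator is pulled toward the other units and toward the stimulus phase.
        import numpy as np

        def phase_step(theta, omega, K, sensory_phase, dt=0.01, k_s=1.0):
            """theta: unit phases, omega: natural frequencies, K: internal coupling gain."""
            n = len(theta)
            internal = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            sensory = k_s * np.sin(sensory_phase - theta)
            return theta + dt * (omega + internal + sensory)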

    Shared Perception in Human-Robot Interaction

    Interaction can be seen as a composition of perspectives: the integration of perceptions, intentions, and actions on the environment that two or more agents share. For an interaction to be effective, each agent must be prone to “sharedness”: being situated in a common environment, able to read what others express about their perspective, and ready to adjust one’s own perspective accordingly. In this sense, effective interaction is supported by perceiving the environment jointly with others, a capability that in this research is called Shared Perception. Nonetheless, perception is a complex process in which the observer receives sensory inputs from the external world and interprets them based on their own previous experiences, predictions, and intentions. In addition, social interaction itself contributes to shaping what is perceived: others’ attention, perspective, actions, and internal states may also be incorporated into perception. Thus, Shared Perception reflects the observer's ability to integrate these three sources of information: the environment, the self, and other agents. If Shared Perception is essential among humans, it is equally crucial for interaction with robots, which need social and cognitive abilities to interact with humans naturally and successfully. This research deals with Shared Perception within the context of Social Human-Robot Interaction (HRI) and involves an interdisciplinary approach. The two general axes of the thesis are the investigation of human perception while interacting with robots and the modeling of the robot’s perception while interacting with humans. These two directions are outlined through three specific Research Objectives, whose achievements represent the contribution of this work. i) The formulation of a theoretical framework of Shared Perception in HRI valid for interpreting and developing different socio-perceptual mechanisms and abilities. ii) The investigation of Shared Perception in humans, focusing on the perceptual mechanism of Context Dependency and thereby exploring how social interaction affects the use of previous experience in human spatial perception. iii) The implementation of a deep-learning model for Addressee Estimation to foster robots’ socio-perceptual skills through the awareness of others’ behavior, as suggested in the Shared Perception framework. To achieve the first Research Objective, several human socio-perceptual mechanisms are presented and interpreted in a unified account. This exposition parallels mechanisms elicited by interaction with humans and with humanoid robots, and aims to build a framework suitable for investigating human perception in the context of HRI. Based on the thought of D. Davidson and conceived as the integration of information coming from the environment, the self, and other agents, the idea of "triangulation" expresses the critical dynamics of Shared Perception. It is also proposed as the functional structure to support the implementation of socio-perceptual skills in robots. This general framework serves as a reference for fulfilling the other two Research Objectives, which explore specific aspects of Shared Perception. Regarding the second Research Objective, the human perceptual mechanism of Context Dependency is investigated, for the first time, within social interaction. Human perception is based on unconscious inference, where sensory inputs are integrated with prior information. This phenomenon helps in facing the uncertainty of the external world with predictions built upon previous experience.
To investigate the effect of social interaction on such a mechanism, the iCub robot was used as an experimental tool to create an interactive scenario in a controlled setting. A user study based on psychophysical methods, Bayesian modeling, and a neural network analysis of the human results demonstrated that social interaction influences Context Dependency: when interacting with a social agent, humans rely less on their internal models and more on external stimuli. These results are framed within Shared Perception and contribute to revealing the integration dynamics of its three sources. The others’ presence and social behavior (other agents) shift the balance between sensory inputs (environment) and personal history (self) in favor of the information shared with others, that is, the environment. The third Research Objective consists of tackling the Addressee Estimation problem, i.e., understanding to whom a speaker is talking, to improve the iCub’s social behavior in multi-party interactions. Addressee Estimation can be considered a Shared Perception ability because it is achieved by using sensory information from the environment, internal representations of the agents’ positions, and, more importantly, the understanding of others’ behavior. An architecture for Addressee Estimation is thus designed considering the integration process of Shared Perception (environment, self, other agents) and partially implemented with respect to the third element: the awareness of others’ behavior. To achieve this, a hybrid deep-learning (CNN+LSTM) model is developed to estimate the speaker- and robot-relative placement of the addressee based on the non-verbal behavior of the speaker. Addressee Estimation abilities based on Shared Perception dynamics are aimed at improving multi-party HRI. Making robots aware of other agents’ behavior towards the environment is the first crucial step toward incorporating such information into the robot’s perception and modeling Shared Perception.
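
    The Context Dependency mechanism studied in the second Research Objective rests on the standard Bayesian view of perception as a precision-weighted combination of prior knowledge and sensory evidence; the sketch below shows that textbook computation, with names chosen here for illustration rather than taken from the thesis.

        # Textbook Gaussian cue integration (illustrative names): the percept is a
        # precision-weighted average of the current stimulus and the prior built
        # from previous experience.
        def bayesian_estimate(stimulus, prior_mean, stimulus_precision, prior_precision):
            total = stimulus_precision + prior_precision
            return (stimulus_precision * stimulus + prior_precision * prior_mean) / total

        # Reduced Context Dependency during social interaction corresponds to a lower
        # effective prior_precision: the estimate then tracks the external stimulus
        # more closely than the internal model.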

    Towards hybrid primary intersubjectivity: a neural robotics library for human science

    Human-robot interaction is becoming an interesting area of research in cognitive science, notably for the study of social cognition. Interaction theorists consider primary intersubjectivity a non-mentalistic, pre-theoretical, non-conceptual sort of process that grounds a certain level of communication and understanding and provides support to higher-level cognitive skills. We argue that this sort of low-level cognitive interaction, where control is shared in dyadic encounters, is amenable to study with neural robots. Hence, in this work we pursue three main objectives. Firstly, from the concept of active inference we study primary intersubjectivity as a second-person perspective experience characterized by predictive engagement, where perception, cognition, and action are accounted for by a hermeneutic circle in dyadic interaction. Secondly, we propose an open-source methodology named neural robotics library (NRL) for experimental human-robot interaction, along with a demonstration program for interacting in real time with a virtual Cartesian robot (VCBot). Lastly, through a case study, we discuss some ways human-robot (hybrid) intersubjectivity can contribute to human science research, such as in the fields of developmental psychology, educational technology, and cognitive rehabilitation.

    Bootstrapping movement primitives from complex trajectories

    Lemme A. Bootstrapping movement primitives from complex trajectories. Bielefeld: Bielefeld University; 2014