
    Online Ensemble Learning of Sensorimotor Contingencies

    Forward models play a key role in cognitive agents by providing predictions of the sensory consequences of motor commands, also known as sensorimotor contingencies (SMCs). In continuously evolving environments, the ability to anticipate is fundamental in distinguishing cognitive from reactive agents, and it is particularly relevant for autonomous robots, which must be able to adapt their models in an online manner. Online learning skills, highly accurate forward models, and multiple-step-ahead predictions are needed to enhance the robots’ anticipation capabilities. We propose an online heterogeneous ensemble learning method for building accurate forward models of SMCs relating motor commands to effects in the robots’ sensorimotor system, in particular considering proprioception and vision. Our method achieves up to 98% higher accuracy in both short- and long-term predictions, compared to single predictors and other online and offline homogeneous ensembles. This method is validated on two different humanoid robots, namely the iCub and the Baxter.
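
    The abstract does not describe the implementation, but the general idea of an error-weighted online ensemble acting as a forward model can be sketched as follows. This is a minimal illustration, not the authors' method: the choice of scikit-learn regressors, the inverse-error weighting, and the one-dimensional sensory effect are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): an online ensemble of
# heterogeneous regressors serves as a forward model that predicts a sensory
# effect from a motor command, weighting members by their recent prediction error.
import numpy as np
from sklearn.linear_model import SGDRegressor, PassiveAggressiveRegressor

class OnlineEnsembleForwardModel:
    def __init__(self, decay=0.9):
        self.models = [SGDRegressor(), PassiveAggressiveRegressor()]
        self.errors = np.ones(len(self.models))   # running prediction errors
        self.decay = decay
        self.fitted = False

    def predict(self, motor_cmd):
        if not self.fitted:
            return 0.0
        preds = np.array([m.predict(motor_cmd.reshape(1, -1))[0] for m in self.models])
        weights = 1.0 / (self.errors + 1e-6)       # lower error -> higher weight
        return float(np.dot(weights, preds) / weights.sum())

    def update(self, motor_cmd, observed_effect):
        X, y = motor_cmd.reshape(1, -1), np.array([observed_effect])
        for i, m in enumerate(self.models):
            if self.fitted:
                err = abs(m.predict(X)[0] - observed_effect)
                self.errors[i] = self.decay * self.errors[i] + (1 - self.decay) * err
            m.partial_fit(X, y)                    # online (incremental) update
        self.fitted = True

# Toy usage: the sensed effect is a hidden linear function of the motor command.
rng = np.random.default_rng(0)
fm = OnlineEnsembleForwardModel()
for _ in range(500):
    cmd = rng.uniform(-1, 1, size=3)
    effect = 0.5 * cmd[0] - 0.2 * cmd[2] + 0.01 * rng.standard_normal()
    fm.update(cmd, effect)
print(fm.predict(np.array([0.8, 0.0, -0.4])))      # prediction for a new command
```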

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.

    Sensorimotor Representation Learning for an “Active Self” in Robots: A Model Survey

    Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operations. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with the human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
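
    As a toy illustration of one ingredient discussed in the survey, the sketch below shows how a forward mapping learned from random motor babbling could be used to attribute sensory changes either to the agent itself or to the external world. The linear plant, the least-squares model, and the error threshold are assumptions made for the example, not the framework proposed in the paper.

```python
# Minimal sketch (assumed setup): motor babbling learns a forward mapping from
# motor commands to proprioceptive change; sensory events that the mapping
# predicts well are attributed to the agent's own actions ("self"), others to the world.
import numpy as np

rng = np.random.default_rng(1)
true_map = np.array([[0.7, -0.1], [0.2, 0.5]])        # hypothetical 2-joint plant

# 1) Self-exploration: random motor babbling collects (command, sensed change) pairs.
cmds = rng.uniform(-1, 1, size=(200, 2))
changes = cmds @ true_map.T + 0.01 * rng.standard_normal((200, 2))
learned_map, *_ = np.linalg.lstsq(cmds, changes, rcond=None)   # least-squares body model

# 2) Self/other attribution: low prediction error -> likely self-generated.
def attributed_to_self(cmd, sensed_change, threshold=0.1):
    predicted = cmd @ learned_map
    return np.linalg.norm(predicted - sensed_change) < threshold

cmd = np.array([0.5, -0.3])
own_motion = cmd @ true_map.T                  # consequence of the agent's own command
pushed = own_motion + np.array([0.4, 0.0])     # same command plus an external push
print(attributed_to_self(cmd, own_motion))     # expected: True
print(attributed_to_self(cmd, pushed))         # expected: False
```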

    Developmental learning of internal models for robotics

    Abstract: Robots that operate in human environments can learn motor skills asocially, from self-exploration, or socially, from imitating their peers. A robot capable of doing both can be more adaptive and autonomous. Learning by imitation, however, requires the ability to understand the actions of others in terms of your own motor system: this information can come from a robot's own exploration. This thesis investigates the minimal requirements for a robotic system that learns from both self-exploration and imitation of others. Through self-exploration and computer vision techniques, a robot can develop forward models: internal models of its own motor system that enable it to predict the consequences of its actions. Multiple forward models are learnt that give the robot a distributed, causal representation of its motor system. It is demonstrated how a controlled increase in the complexity of these forward models speeds up the robot's learning. The robot can determine the uncertainty of its forward models, enabling it to explore so as to improve the accuracy of its predictions. Paying attention to the forward models according to how their uncertainty is changing leads to a development in the robot's exploration: its interventions focus on increasingly difficult situations, adapting to the complexity of its motor system. A robot can invert forward models, creating inverse models, in order to estimate the actions that will achieve a desired goal. Switching to social learning, the robot uses these inverse models to imitate both a demonstrator's gestures and the underlying goals of their movement.
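
    A rough sketch of the forward/inverse-model relationship described in the abstract follows; the one-dimensional plant, the tanh forward model, and the sampling-based inversion are stand-ins chosen for illustration, not the thesis' actual learning method.

```python
# Sketch: a learned forward model predicts the outcome of an action, and an
# inverse model is obtained by searching the forward model for the action whose
# predicted outcome is closest to a desired goal.
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "learned" forward model: predicted hand position after a motor command.
def forward_model(command):
    return np.tanh(1.5 * command)

# Inverse model by inverting the forward model: sample candidate commands and
# keep the one whose predicted consequence best matches the goal.
def inverse_model(goal, n_candidates=1000):
    candidates = rng.uniform(-2.0, 2.0, size=n_candidates)
    predictions = forward_model(candidates)
    return candidates[np.argmin(np.abs(predictions - goal))]

goal = 0.6
action = inverse_model(goal)
print(action, forward_model(action))   # chosen command and its predicted effect
```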

    Behavior-grounded multi-sensory object perception and exploration by a humanoid robot

    Infants use exploratory behaviors to learn about the objects around them. Psychologists have theorized that behaviors such as touching, pressing, lifting, and dropping enable infants to form grounded object representations. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a “question” to the object, which is subsequently “answered” by the sensory stimuli produced during the execution of the behavior. In contrast, most object representations used by robots today rely solely on computer vision or laser scan data, gathered through passive observation. Such disembodied approaches to robotic perception may be useful for recognizing an object using a 3D model database, but nevertheless, will fail to infer object properties that cannot be detected using vision alone. To bridge this gap, this dissertation introduces a framework for object perception and exploration in which the robot’s representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. The results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties.
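
    The following toy sketch illustrates the general flavor of a behavior-grounded object representation: each object is described by the sensory features produced when a set of exploratory behaviors is applied to it, and recognition is nearest-neighbor matching over those features. The behaviors, modalities, and synthetic signatures are invented for the example and are not the dissertation's data or code.

```python
# Toy behavior-grounded object representation: features indexed by
# (exploratory behavior, sensory modality), recognition by nearest neighbor.
import numpy as np

behaviors = ["lift", "scratch", "drop"]           # exploratory "questions"
modalities = ["audio", "proprioception"]          # channels that "answer" them

def explore(object_signature, rng):
    """Simulate executing every behavior on an object and recording noisy features."""
    return {(b, m): object_signature[(b, m)] + 0.05 * rng.standard_normal(3)
            for b in behaviors for m in modalities}

rng = np.random.default_rng(3)
# Ground-truth signatures for two hypothetical objects.
known = {name: {(b, m): rng.uniform(0, 1, 3) for b in behaviors for m in modalities}
         for name in ["metal_can", "soft_ball"]}

# Build grounded models from (noisy) exploration, then recognize a new trial.
models = {name: explore(sig, rng) for name, sig in known.items()}
trial = explore(known["soft_ball"], rng)

def distance(rep_a, rep_b):
    return sum(np.linalg.norm(rep_a[k] - rep_b[k]) for k in rep_a)

print(min(models, key=lambda name: distance(trial, models[name])))  # -> soft_ball
```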

    The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies

    Is there a single principle by which neural operations can account for perception, cognition, action, and even consciousness? A strong candidate is now taking shape in the form of “predictive processing”. On this theory, brains engage in predictive inference on the causes of sensory inputs by continuous minimization of prediction errors or informational “free energy”. Predictive processing can account, supposedly, not only for perception, but also for action and for the essential contribution of the body and environment in structuring sensorimotor interactions. In this paper I draw together some recent developments within predictive processing that involve predictive modelling of internal physiological states (interoceptive inference), and integration with “enactive” and “embodied” approaches to cognitive science (predictive perception of sensorimotor contingencies). The upshot is a development of predictive processing that originates, not in Helmholtzian perception-as-inference, but rather in 20th-century cybernetic principles that emphasized homeostasis and predictive control. This way of thinking leads to (i) a new view of emotion as active interoceptive inference; (ii) a common predictive framework linking experiences of body ownership, emotion, and exteroceptive perception; (iii) distinct interpretations of active inference as involving disruptive and disambiguatory—not just confirmatory—actions to test perceptual hypotheses; (iv) a neurocognitive operationalization of the “mastery of sensorimotor contingencies” (where sensorimotor contingencies reflect the rules governing sensory changes produced by various actions); and (v) an account of the sense of subjective reality of perceptual contents (“perceptual presence”) in terms of the extent to which predictive models encode potential sensorimotor relations (this being “counterfactual richness”). This is rich and varied territory, and surveying its landmarks emphasizes the need for experimental tests of its key contributions.
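
    A minimal numerical sketch of the prediction-error-minimization idea that predictive processing builds on is given below; it is a generic predictive-coding toy under an assumed linear generative model, not the paper's formal treatment of free energy or interoceptive inference.

```python
# Generic predictive-coding toy: an internal estimate mu of a hidden cause is
# revised by gradient descent on the squared error between predicted and
# observed sensory input.

def generative_model(mu):
    return 2.0 * mu                          # assumed mapping from hidden cause to sensation

true_cause = 1.5
observation = generative_model(true_cause) + 0.05   # slightly perturbed sensory sample

mu, lr = 0.0, 0.05
for _ in range(200):
    prediction_error = observation - generative_model(mu)
    # gradient step on F = 0.5 * prediction_error**2, using d(generative_model)/dmu = 2.0
    mu += lr * 2.0 * prediction_error

print(mu)   # settles near the value that explains the observation (about 1.525)
```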

    Embodied neuromorphic intelligence

    The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies – from perception to motor control – represents a promising approach for the creation of robots which can seamlessly integrate in society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.
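
    As a hedged illustration of the kind of computing primitive neuromorphic hardware is built around, the sketch below simulates a textbook leaky integrate-and-fire neuron in plain Python; the parameters are arbitrary and no specific chip or library API is implied.

```python
# Textbook leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates input current, and emits a spike when it crosses threshold.
import numpy as np

dt, tau, v_rest, v_thresh, v_reset = 1e-3, 20e-3, 0.0, 1.0, 0.0
v, spikes = v_rest, []
rng = np.random.default_rng(4)
input_current = 1.2 + 0.3 * rng.standard_normal(1000)   # noisy drive, arbitrary units

for t, i_in in enumerate(input_current):
    v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration of the input
    if v >= v_thresh:                        # threshold crossing emits a spike
        spikes.append(t * dt)
        v = v_reset                          # membrane potential resets after a spike

print(len(spikes), "spikes in 1 s of simulated input")
```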