
    Development of reaching to the body in early infancy: From experiments to robotic models

    We have been observing how infants between 3 and 21 months react when a vibrotactile stimulus (a buzzer) is applied to different parts of their bodies. Responses included, in particular, movement of the stimulated body part and successful reaching for and removal of the buzzer. Overall, there is a pronounced developmental progression from general to specific movement patterns, especially in the first year. In this article we review the series of studies we conducted and then focus on possible mechanisms that might explain what we observed. One possible mechanism relies on the brain extracting “sensorimotor contingencies” linking motor actions to their sensory consequences. This account posits that infants are driven by intrinsic motivation that guides exploratory motor activity, at first generating random motor babbling with self-touch occurring spontaneously. Later, goal-oriented motor behavior emerges, with self-touch as a possibly effective tool for inducing informative contingencies. We connect this sensorimotor view with a second possible account that appeals to the neuroscientific concepts of cortical maps and coordinate transformations. In this second account, the improvement of reaching precision is mediated by refinement of neuronal maps in primary sensory and motor cortices—the homunculi—as well as in frontal and parietal cortical regions dedicated to sensorimotor processing. We complement this theoretical account with modeling on a humanoid robot with artificial skin, in which we implemented reaching for tactile stimuli as well as learning of the “somatosensory homunculi”. We suggest that this account can be extended to reflect the driving role of sensorimotor contingencies in human development. In our conclusion we consider possible extensions of our current experiments that take account of predictions derived from both kinds of models.
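    As a minimal illustration of the kind of mechanism described above, the sketch below learns a toy “somatosensory homunculus” with a self-organizing map driven by random motor babbling on a simulated skin array; the array size, map size, and touch model are illustrative assumptions, not the authors' implementation (Python).

        import numpy as np

        rng = np.random.default_rng(0)

        N_TAXELS = 64              # hypothetical artificial-skin array size
        GRID = 10                  # 10 x 10 map of "cortical" units
        units = rng.random((GRID, GRID, N_TAXELS))   # initial receptive fields

        def babble_touch():
            """Random motor babbling: each self-touch activates a small patch of skin."""
            centre = rng.integers(N_TAXELS)
            return np.exp(-0.5 * ((np.arange(N_TAXELS) - centre) / 2.0) ** 2)

        def som_step(x, lr=0.1, sigma=1.5):
            """Move the best-matching unit (and its neighbours) towards the input."""
            dist = np.linalg.norm(units - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
            ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            units[:] += lr * h[..., None] * (x - units)

        for _ in range(5000):      # prolonged babbling gradually refines the map
            som_step(babble_touch())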

    SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model

    To realize human-like robot intelligence, a large-scale cognitive architecture is required for robots to understand their environment through the variety of sensors with which they are equipped. In this paper, we propose a novel framework named Serket that enables a large-scale generative model to be constructed, and its inference to be performed, simply by connecting sub-modules, allowing robots to acquire various capabilities through interaction with their environments and with others. We consider that large-scale cognitive models can be constructed by connecting smaller fundamental models hierarchically while maintaining their programmatic independence. However, the connected modules depend on each other, and their parameters must be optimized as a whole. Conventionally, the equations for parameter estimation have to be derived and implemented for each specific model, which becomes increasingly difficult as the model grows. To solve this problem, we propose a method for parameter estimation that communicates only minimal parameters between modules while maintaining their programmatic independence. Serket thus makes it easy to construct large-scale models and to estimate their parameters by connecting modules. Experimental results demonstrate that models can be constructed by connecting modules, that their parameters can be optimized as a whole, and that the resulting performance is comparable to that of the original models we previously proposed.
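    A minimal sketch of the core idea (sub-modules kept programmatically independent and jointly optimized by exchanging only small messages) is given below; the two toy modules, their interface, and the update rules are illustrative assumptions rather than the Serket API (Python).

        import numpy as np

        class Module:
            """One stochastic sub-model; it exchanges only its shared latent variables."""
            def infer(self, incoming):
                raise NotImplementedError    # local update given the neighbour's message

        class Quantizer(Module):
            """Toy 'perception' module: soft-assigns observations to K prototypes."""
            def __init__(self, obs, K=3, seed=0):
                self.obs = obs
                self.proto = np.random.default_rng(seed).normal(size=(K, obs.shape[1]))

            def infer(self, incoming):
                # incoming: class prior sent by the neighbouring module (None at start)
                d = ((self.obs[:, None, :] - self.proto[None]) ** 2).sum(-1)
                logp = -d + (np.log(incoming + 1e-9) if incoming is not None else 0.0)
                resp = np.exp(logp - logp.max(1, keepdims=True))
                resp /= resp.sum(1, keepdims=True)
                self.proto = (resp.T @ self.obs) / (resp.sum(0)[:, None] + 1e-9)  # local M-step
                return resp                       # message passed up to the next module

        class ClassPrior(Module):
            """Toy 'higher-level' module: maintains a global distribution over classes."""
            def infer(self, incoming):
                return incoming.mean(0)           # message passed back down

        # Joint optimization emerges from alternately exchanging these small messages.
        obs = np.random.default_rng(1).normal(size=(100, 2))
        low, high = Quantizer(obs), ClassPrior()
        msg = None
        for _ in range(20):
            resp = low.infer(msg)
            msg = high.infer(resp)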

    From locomotion to cognition: Bridging the gap between reactive and cognitive behavior in a quadruped robot

    The cognitivistic paradigm, which states that cognition is a result of computation with symbols that represent the world, has been challenged by many. The opponents have primarily criticized the detachment from direct interaction with the world and pointed to some fundamental problems (for instance, the symbol grounding problem). Instead, they emphasized the constitutive role of embodied interaction with the environment. This has motivated the advancement of synthetic methodologies: the phenomenon of interest (cognition) can be studied by building and investigating whole brain-body-environment systems. Our work is centered around a compliant quadruped robot equipped with a multimodal sensory set. In a series of case studies, we investigate the structure of the sensorimotor space that arises as the robot applies different actions in different environments. Then, we study how the agent can autonomously abstract the regularities that are induced by the different conditions and use them to improve its behavior. The agent is engaged in path integration, terrain discrimination and gait adaptation, and moving-target following tasks. The nature of the tasks forces the robot to leave the “here-and-now” time scale of simple reactive stimulus-response behaviors and to learn from its experience, thus creating a “minimally cognitive” setting. Solutions to these problems are developed by the agent in a bottom-up fashion. The complete scenarios are then used to illuminate the concepts that are believed to lie at the basis of cognition: sensorimotor contingencies, body schema, and forward internal models. Finally, we discuss how the presented solutions are relevant for applications in robotics, in particular in the area of autonomous model acquisition and adaptation, and, in mobile robots, in dead reckoning and traversability detection.
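    As one concrete piece of the above, a minimal sketch of path integration (dead reckoning) is given below: the robot accumulates its pose from per-step displacement and heading change, estimates which in the studies would come from proprioceptive and gait data; the step model here is purely illustrative (Python).

        import math

        def integrate_path(steps):
            """Dead reckoning: accumulate pose from per-step (length, heading-change) pairs.

            `steps` is an iterable of (step_length_m, heading_change_rad) estimates,
            e.g. predicted from proprioceptive/gait features by a learned model.
            """
            x, y, theta = 0.0, 0.0, 0.0
            for length, dtheta in steps:
                theta += dtheta                   # turn first (illustrative convention)
                x += length * math.cos(theta)
                y += length * math.sin(theta)
            return x, y, theta

        # Made-up example: ten 5 cm steps while turning 3 degrees left per step.
        print(integrate_path([(0.05, math.radians(3))] * 10))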

    Robotic hand augmentation drives changes in neural body representation

    Humans have long been fascinated by the opportunities afforded through augmentation. This vision not only depends on technological innovations but also critically relies on our brain's ability to learn, adapt, and interface with augmentation devices. Here, we investigated whether successful motor augmentation with an extra robotic thumb can be achieved and what its implications are for the neural representation and function of the biological hand. Able-bodied participants were trained to use an extra robotic thumb (called the Third Thumb) over 5 days, including both lab-based and unstructured daily use. We challenged participants to complete normally bimanual tasks using only the augmented hand and examined their ability to develop hand-robot interactions. Participants were tested on a variety of behavioral and brain imaging tests, designed to interrogate the augmented hand's representation before and after the training. Training improved Third Thumb motor control, dexterity, and hand-robot coordination, even when cognitive load was increased or when vision was occluded. It also resulted in an increased sense of embodiment over the Third Thumb. Consequently, augmentation influenced key aspects of hand representation and motor control. Third Thumb usage weakened natural kinematic synergies of the biological hand. Furthermore, brain decoding revealed a mild collapse of the augmented hand's motor representation after training, even while the Third Thumb was not worn. Together, our findings demonstrate that motor augmentation can be readily achieved, with potential for flexible use, reduced cognitive reliance, and increased sense of embodiment. Yet, augmentation may incur changes to the biological hand representation. Such neurocognitive consequences are crucial for successful implementation of future augmentation technologies.
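    The “kinematic synergies” referred to above are commonly quantified as principal components of joint-angle data; the sketch below shows that standard analysis on made-up data and is not the paper's actual pipeline (Python).

        import numpy as np

        def hand_synergies(joint_angles, n_synergies=3):
            """PCA over joint-angle samples (rows = time samples, columns = hand joints).

            Returns the leading synergy vectors and the variance fraction each explains;
            a weakened synergy structure shows up as variance spread over more components.
            """
            X = joint_angles - joint_angles.mean(axis=0)
            _, s, vt = np.linalg.svd(X, full_matrices=False)
            explained = s ** 2 / (s ** 2).sum()
            return vt[:n_synergies], explained[:n_synergies]

        # Made-up example: 200 samples of 15 joint angles.
        angles = np.random.default_rng(0).normal(size=(200, 15))
        synergies, explained = hand_synergies(angles)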

    Peripersonal Space in the Humanoid Robot iCub

    Developing behaviours for interaction with objects close to the body is a primary goal for any organism to survive in the world. Being able to develop such behaviours will be an essential feature in autonomous humanoid robots in order to improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills as well as their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without displacement of the body. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision in a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue within the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly consistent with errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. The model was advantageous when compared to one that included no developmental stages. The contribution to knowledge of this work is that it extends the research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in the dynamical adaptation to body and sensing characteristics, vision-based action, morphology and confidence levels in reaching assessment.
    CONACyT, Mexico (National Council of Science and Technology)
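    A worked example of the vergence cue discussed above: under symmetric-vergence geometry, the angle between the two gaze lines at fixation gives the distance to the target, and a fixed angular error displaces the estimate much more for far targets than for near ones, consistent with the reported drop in confidence at longer distances. The baseline value and angles below are illustrative (Python).

        import math

        def depth_from_vergence(vergence_deg, baseline_m=0.068):
            """Distance to the fixation point under symmetric vergence.

            vergence_deg: angle between the two gaze lines at fixation.
            baseline_m:   distance between the camera centres (illustrative value).
            """
            half_angle = math.radians(vergence_deg) / 2.0
            return (baseline_m / 2.0) / math.tan(half_angle)

        # Smaller vergence angles (farther targets) give much less precise estimates.
        for v in (15.0, 5.0, 2.0):
            print(f"vergence {v:4.1f} deg -> depth {depth_from_vergence(v):.2f} m")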

    Multimodal human hand motion sensing and analysis - a review
