
    Rhythms and Robot Relations


    A Computational Approach for Human-like Motion Generation in Upper Limb Exoskeletons Supporting Scapulohumeral Rhythms

    This paper proposes a computational approach for generating reference paths for upper-limb exoskeletons that accounts for the scapulohumeral rhythm of the shoulder. The proposed method can be used in upper-limb exoskeletons with 3 Degrees of Freedom (DoF) in the shoulder and 1 DoF in the elbow that are capable of supporting the shoulder girdle. The computational method is based on the governing rules of the Central Nervous System (CNS). Existing computational reference-generation methods assume a fixed shoulder center during motion. This assumption can be considered valid for reaching movements with a limited range of motion (RoM). However, most upper-limb motions, such as Activities of Daily Living (ADL), include large-scale inward and outward reaching motions during which the center of the shoulder joint moves significantly. The proposed method generates the reference motion based on a simple model of the human arm, and a transformation can be used to map the generated motion to other exoskeletons with different kinematics. Comparison of the model outputs with experimental results from healthy subjects performing ADL shows that the proposed model is able to reproduce human-like motions.
    Comment: In 2017 IEEE International Symposium on Wearable & Rehabilitation Robotics (WeRob2017)
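
    As a rough illustration of the kinematic idea (the shoulder joint center moves as the arm elevates), the sketch below splits total arm elevation using the classic approximately 2:1 scapulohumeral rhythm. All numbers and function names are assumptions for illustration; this is not the CNS-based model proposed in the paper.

```python
import numpy as np

def scapulohumeral_split(arm_elevation_deg):
    """Split total arm elevation into glenohumeral and scapulothoracic
    components using the classic ~2:1 scapulohumeral rhythm.

    Simplified illustration, not the paper's model: during the first
    ~30 deg ("setting phase") the scapula is roughly still; beyond that,
    about 2 deg of glenohumeral motion occur per 1 deg of scapular rotation.
    """
    setting_phase = min(arm_elevation_deg, 30.0)   # scapula mostly still
    remainder = max(arm_elevation_deg - 30.0, 0.0)
    glenohumeral = setting_phase + remainder * 2.0 / 3.0
    scapulothoracic = remainder * 1.0 / 3.0
    return glenohumeral, scapulothoracic

def shoulder_center_offset(scapulothoracic_deg, clavicle_len=0.15):
    """Approximate vertical displacement of the glenohumeral joint center
    as the scapula rotates upward (clavicle_len in metres is an assumed
    value)."""
    return clavicle_len * np.sin(np.radians(scapulothoracic_deg))
```

    Under these assumptions, 90 degrees of total elevation splits into roughly 70 degrees glenohumeral and 20 degrees scapulothoracic rotation, and the scapular rotation displaces the shoulder joint center, the effect a fixed-shoulder-center reference generator ignores.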

    Evolutionary robotics and neuroscience


    EEG theta and Mu oscillations during perception of human and robot actions.

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other.
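
    Mu suppression of the kind described above is commonly quantified as a log ratio of 8-13 Hz power during action observation relative to a baseline period. The sketch below shows a minimal version of that computation using a simple periodogram; the function names are assumptions, and this is illustrative rather than the paper's actual analysis pipeline.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power spectral density in [lo, hi] Hz via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def mu_suppression(action_epoch, baseline_epoch, fs):
    """Log ratio of mu-band (8-13 Hz) power, action vs. baseline.
    Negative values indicate suppression; this sign convention and the
    single-epoch periodogram are simplifying assumptions."""
    p_action = band_power(action_epoch, fs, 8.0, 13.0)
    p_baseline = band_power(baseline_epoch, fs, 8.0, 13.0)
    return np.log(p_action / p_baseline)
```

    Feeding in an action epoch whose 10 Hz component is attenuated relative to baseline yields a negative index, i.e. mu suppression.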

    Feel the beat: using cross-modal rhythm to integrate perception of objects, others, and self

    For a robot to be capable of development, it must be able to explore its environment and learn from its experiences. It must find (or create) opportunities to experience the unfamiliar in ways that reveal properties valid beyond the immediate context. In this paper, we develop a novel method for using the rhythm of everyday actions as a basis for identifying the characteristic appearance and sounds associated with objects, people, and the robot itself. Our approach is to identify and segment groups of signals in individual modalities (sight, hearing, and proprioception) based on their rhythmic variation, then to identify and bind causally related groups of signals across different modalities. By including proprioception as a modality, this cross-modal binding method applies to the robot itself, and we report a series of experiments in which the robot learns about the characteristics of its own body.
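
    One minimal way to sketch the binding idea is to estimate each modality's dominant period from its autocorrelation and then group channels whose periods agree, taking a shared rhythm as evidence of a common cause. The function names and tolerance below are assumptions; the paper's actual segmentation and binding method is more elaborate.

```python
import numpy as np

def estimated_period(signal, min_lag=2):
    """Estimate a signal's dominant period (in samples) as the lag of the
    highest autocorrelation peak beyond min_lag. A crude stand-in for
    rhythm detection."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    return int(np.argmax(ac[min_lag: len(x) // 2]) + min_lag)

def bind_by_rhythm(channels, tol=2):
    """Group named channels (modality -> 1-D array) whose estimated
    periods agree within tol samples."""
    periods = {name: estimated_period(sig) for name, sig in channels.items()}
    groups = []
    for name, p in periods.items():
        for g in groups:
            if abs(periods[g[0]] - p) <= tol:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

    For instance, a visual and an auditory channel oscillating with the same period end up bound together, while a proprioceptive channel with a different rhythm forms its own group.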

    Rehabilitative devices for a top-down approach

    In recent years, neurorehabilitation has moved from a "bottom-up" to a "top-down" approach. This change has also involved the technological devices developed for motor and cognitive rehabilitation. It implies that during a task or during therapeutic exercises, new "top-down" approaches are being used to stimulate the brain more directly and elicit plasticity-mediated motor re-learning. This is opposed to "bottom-up" approaches, which act at the physical level and attempt to bring about changes at the level of the central nervous system. Areas covered: In the present unsystematic review, we present the most promising innovative technological devices that can effectively support rehabilitation based on a top-down approach, according to the most recent neuroscientific and neurocognitive findings. In particular, we explore if and how the use of new technological devices comprising serious exergames, virtual reality, robots, brain-computer interfaces, rhythmic music, and biofeedback devices might provide a top-down based approach. Expert commentary: Motor and cognitive systems are strongly intertwined in humans and thus cannot be separated in neurorehabilitation. Recently developed technologies in motor-cognitive rehabilitation might have a greater positive effect than conventional therapies.

    A motivational model based on artificial biological functions for the intelligent decision-making of social robots

    Modelling the biology behind animal behaviour has attracted great interest in recent years. Nevertheless, neuroscience and artificial intelligence face the challenge of representing and emulating animal behaviour in robots. Consequently, this paper presents a biologically inspired motivational model to control the biological functions of autonomous robots that interact with and emulate human behaviour. The model is intended to produce fully autonomous, natural behaviour that can adapt to both familiar and unexpected situations in human–robot interactions. The primary contribution of this paper is to present novel methods for modelling the robot’s internal state to generate deliberative and reactive behaviour, how the robot perceives and evaluates stimuli from the environment, and the role of emotional responses. Our architecture emulates essential animal biological functions such as neuroendocrine responses, circadian and ultradian rhythms, motivation, and affection to generate biologically inspired behaviour in social robots. Neuroendocrine substances control biological functions such as sleep, wakefulness, and emotion. Deficits in these processes regulate the robot’s motivational and affective states, significantly influencing its decision-making and, therefore, its behaviour. We evaluated the model by observing the long-term behaviour of the social robot Mini while it interacted with people. The experiment assessed how the robot’s behaviour varied and evolved depending on its internal variables and external situations, adapting to different conditions. The outcomes show that an autonomous robot with appropriate decision-making can cope with its internal deficits and unexpected situations, controlling its sleep–wake cycle, social behaviour, affective states, and stress during human–robot interactions.

    The research leading to these results has received funding from the projects Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades, and Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
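
    A toy sketch of the kind of architecture described above: homeostatic deficits grow over time, a circadian factor modulates the sleep drive, and the behaviour with the strongest motivation is selected. All variable names, gains, and time constants here are invented for illustration and are not the parameters of the model actually used on the robot Mini.

```python
import math

class MotivationalModel:
    """Toy homeostatic decision-making loop: deficits accumulate while
    unattended, a circadian rhythm weights the sleep drive, and the
    strongest motivation selects the behaviour. Purely illustrative."""

    def __init__(self):
        self.deficits = {"energy": 0.0, "social": 0.0}

    def step(self, hour, dt=1.0):
        # Deficits grow over time while unattended (capped at 1.0).
        self.deficits["energy"] = min(1.0, self.deficits["energy"] + 0.02 * dt)
        self.deficits["social"] = min(1.0, self.deficits["social"] + 0.05 * dt)
        # Circadian factor: peaks at hour 0 (night), lowest at midday.
        circadian = 0.5 * (1.0 + math.cos(2.0 * math.pi * hour / 24.0))
        motivations = {
            "sleep": self.deficits["energy"] * (0.5 + circadian),
            "interact": self.deficits["social"],
        }
        # Winner-take-all behaviour selection.
        return max(motivations, key=motivations.get)
```

    With a large energy deficit at night the model selects sleep; with small deficits at midday the social drive dominates and it selects interaction.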

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons—neuronal assemblies—is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting objectives. In both cases, the more traditional dynamical analysis of the system’s variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimuli modulation of the sensorimotor interactions.
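
    The Kuramoto model referenced above can be simulated in a few lines: each oscillator advances at its natural frequency plus a mean-field coupling term, and the order parameter |r| quantifies how synchronized the assembly is. The parameter values below are illustrative, not those of the paper's network.

```python
import numpy as np

def kuramoto_step(phases, omegas, K, dt=0.01):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diffs = phases[None, :] - phases[:, None]   # diffs[i, j] = theta_j - theta_i
    coupling = (K / len(phases)) * np.sin(diffs).sum(axis=1)
    return phases + dt * (omegas + coupling)

def order_parameter(phases):
    """|r| in [0, 1]: 0 = incoherent, 1 = a fully synchronized assembly."""
    return np.abs(np.exp(1j * phases).mean())

# 50 oscillators with natural frequencies clustered around 1.0 and a
# coupling strength well above the synchronization threshold for this
# frequency spread, so the population locks into a coherent assembly.
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, 50)
omegas = rng.normal(1.0, 0.1, 50)
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, K=2.0)
```

    Sweeping K downward toward zero instead leaves the oscillators incoherent, which is the kind of synchronization fine-tuning the abstract mentions.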