
    Design of a Realistic Robotic Head based on Action Coding System

    In this paper, the development of a robotic head able to move and show different emotions is addressed. The movement and emotion generation system has been designed following the human facial musculature. Starting from the Facial Action Coding System (FACS), we have built a 26 action units model that is able to produce the most relevant movements and emotions of a real human head. The whole work has been carried out in two steps. In the first step, a mechanical skeleton has been designed and built, in which the different actuators have been inserted. In the second step, a two-layered silicone skin has been manufactured, on which the different actuators have been inserted following the real muscle insertions, for performing the different movements and gestures. The developed head has been integrated in a high-level behavioural architecture, and pilot experiments with 10 users regarding emotion recognition and mimicking have been carried out.
    Funding: Junta de Castilla y León (Programa de apoyo a proyectos de investigación, Ref. VA036U14 and Ref. VA013A12-2); Ministerio de Economía, Industria y Competitividad (Grant DPI2014-56500-R).
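    The idea of driving actuators from FACS action units can be illustrated with a small sketch. Everything below is a hypothetical example, not the paper's implementation: the emotion-to-AU table, the servo ranges, and the linear interpolation are all invented for illustration; only the AU numbering follows the FACS convention.

    ```python
    # Hypothetical mapping from FACS action units (AUs) to servo set-points.
    # Emotion recipes: sets of (AU, intensity) pairs, intensity in [0, 1].
    EMOTION_TO_AUS = {
        "happiness": {6: 0.8, 12: 1.0},          # cheek raiser, lip corner puller
        "surprise":  {1: 1.0, 2: 1.0, 26: 0.7},  # brow raisers, jaw drop
        "sadness":   {1: 0.6, 4: 0.5, 15: 0.8},  # brow raiser/lowerer, lip depressor
    }

    # Each AU drives one actuator between a rest and a full-activation angle (deg).
    AU_SERVO_RANGE = {1: (90, 120), 2: (90, 115), 4: (90, 70),
                      6: (90, 110), 12: (90, 130), 15: (90, 60), 26: (90, 140)}

    def emotion_to_servo_angles(emotion: str) -> dict:
        """Interpolate each involved AU's servo between rest and full by intensity."""
        angles = {}
        for au, intensity in EMOTION_TO_AUS[emotion].items():
            rest, full = AU_SERVO_RANGE[au]
            angles[au] = rest + intensity * (full - rest)
        return angles

    angles = emotion_to_servo_angles("happiness")  # e.g. AU12 fully engaged
    ```

    With 26 AUs, a full expression is just such a vector of per-actuator set-points, which is what lets one coding scheme cover both movements and emotions.
    
    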

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Muecas: a multi-sensor robotic head for affective human robot interaction and imitation

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions.
    Funding: Ministerio de Ciencia e Innovación, project TIN2012-38079-C03-1; Gobierno de Extremadura, project GR10144.
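    The benefit of using FACS as the shared interface can be sketched briefly: because both the recognizer and the head speak action units (AUs), imitation reduces to forwarding the recognized AU vector to the actuation layer. The prototype table and threshold below are illustrative assumptions and not part of RoboComp's actual API; AU numbering follows FACS (6 cheek raiser, 12 lip corner puller, 26 jaw drop).

    ```python
    # Hypothetical nearest-prototype classifier over AU activation patterns.
    PROTOTYPES = {
        "happiness": {6, 12},
        "surprise": {1, 2, 26},
        "neutral": set(),
    }

    def classify_expression(aus: dict) -> str:
        """Score each prototype: reward AU overlap, penalise mismatches."""
        active = {au for au, intensity in aus.items() if intensity > 0.5}
        return max(PROTOTYPES,
                   key=lambda e: len(active & PROTOTYPES[e]) - len(active ^ PROTOTYPES[e]))

    # Imitation is then a pass-through: the same AU dict that was classified can be
    # handed, unchanged, to whatever layer maps AUs onto the 12 degrees of freedom.
    label = classify_expression({6: 0.9, 12: 0.8, 4: 0.1})
    ```

    The semantic label is only needed for logging or goal-based planning; the imitation path itself never has to leave AU space.
    
    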

    Controlling a mobile robot with a biological brain

    The intelligent controlling mechanism of a typical mobile robot is usually a computer system. Some recent research is ongoing in which biological neurons are being cultured and trained to act as the brain of an interactive real world robot, thereby either completely replacing, or operating in a cooperative fashion with, a computer system. Studying such hybrid systems can provide distinct insights into the operation of biological neural structures, and therefore, such research has immediate medical implications as well as enormous potential in robotics. The main aim of the research is to assess the computational and learning capacity of dissociated cultured neuronal networks. A hybrid system incorporating closed-loop control of a mobile robot by a dissociated culture of neurons has been created. The system is flexible and allows for closed-loop operation, either with a hardware robot or its software simulation. The paper provides an overview of the problem area, gives an idea of the breadth of present ongoing research, establishes a new system architecture and, as an example, reports on the results of conducted experiments with real-life robots.
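    The closed loop described above has two translation steps: robot sensor readings must become electrical stimulation patterns for the culture, and recorded neuronal activity must become motor commands. The sketch below is purely schematic; the inverse distance mapping, the gains, and the differential-drive decoding are invented for illustration, and a real multi-electrode-array setup would go through dedicated acquisition and stimulation hardware.

    ```python
    # Schematic closed loop: sonar distance -> stimulation frequency, and
    # per-channel firing rates -> wheel speeds. All constants are assumptions.

    def sensor_to_stimulation(distance_cm: float, max_rate_hz: float = 50.0) -> float:
        """Closer obstacles produce higher stimulation frequency (inverse mapping)."""
        distance_cm = max(distance_cm, 1.0)   # avoid division blow-up at contact
        return min(max_rate_hz, 100.0 / distance_cm)

    def firing_rate_to_motor(rate_left_hz: float, rate_right_hz: float):
        """Differential drive: each side's firing rate scales that wheel's speed."""
        gain = 0.02  # m/s per Hz, arbitrary
        return gain * rate_left_hz, gain * rate_right_hz

    # One iteration of the loop (software-simulation stand-in):
    stim = sensor_to_stimulation(distance_cm=4.0)  # obstacle ahead -> 25 Hz stimulation
    v_l, v_r = firing_rate_to_motor(30.0, 10.0)    # asymmetric activity -> the robot turns
    ```

    The point of the sketch is the architecture, not the numbers: learning, if any, emerges from how the culture's responses to stimulation change over repeated loop iterations.
    
    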

    Experience of Robotic Exoskeleton Use at Four Spinal Cord Injury Model Systems Centers

    Background and Purpose: Refinement of robotic exoskeletons for overground walking is progressing rapidly. We describe clinicians' experiences, evaluations, and training strategies using robotic exoskeletons in spinal cord injury rehabilitation and wellness settings and describe clinicians' perceptions of exoskeleton benefits and risks and developments that would enhance utility. Methods: We convened focus groups at 4 spinal cord injury model system centers. A court reporter took verbatim notes and provided a transcript. Research staff used a thematic coding approach to summarize discussions. Results: Thirty clinicians participated in focus groups. They reported using exoskeletons primarily in outpatient and wellness settings; 1 center used exoskeletons during inpatient rehabilitation. A typical episode of outpatient exoskeleton therapy comprises 20 to 30 sessions and at least 2 staff members are involved in each session. Treatment focuses on standing, stepping, and gait training; therapists measure progress with standardized assessments. Beyond improved gait, participants attributed physiological, psychological, and social benefits to exoskeleton use. Potential risks included falls, skin irritation, and disappointed expectations. Participants identified enhancements that would be of value including greater durability and adjustability, lighter weight, 1-hand controls, ability to navigate stairs and uneven surfaces, and ability to balance without upper extremity support. Discussion and Conclusions: Each spinal cord injury model system center had shared and distinct practices in terms of how it integrates robotic exoskeletons into physical therapy services. There is currently little evidence to guide integration of exoskeletons into rehabilitation therapy services and a pressing need to generate evidence to guide practice and to inform patients' expectations as more devices enter the market.

    Adaptive Robotic Control Driven by a Versatile Spiking Cerebellar Network

    The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eyeblink classical conditioning), a vestibulo-ocular task and a perturbed arm reaching task operating in closed loop. The SNN processed in real time mossy fiber inputs as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.
    Funding: European Union (Human Brain Project), REALNET FP7-ICT270434, CEREBNET FP7-ITN238686, HBP-60410.
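    The bidirectional plasticity at parallel fiber-Purkinje cell synapses mentioned above can be caricatured with a rate-coded rule: a climbing-fiber error signal depresses recently active parallel-fiber synapses (LTD), while activity in the absence of error slowly potentiates them (LTP). The learning rates, the gating by a scalar error, and the rate-based (non-spiking) formulation are simplifying assumptions for illustration, not the paper's actual model.

    ```python
    # Toy bidirectional PF-PC learning rule: error gates LTD vs LTP per synapse.
    def update_pf_pc_weights(weights, pf_activity, cf_error,
                             ltd_rate=0.01, ltp_rate=0.001):
        """Return updated weights, clamped to [0, 1]; cf_error in [0, 1]."""
        new_w = []
        for w, pf in zip(weights, pf_activity):
            dw = -ltd_rate * pf * cf_error + ltp_rate * pf * (1.0 - cf_error)
            new_w.append(min(1.0, max(0.0, w + dw)))
        return new_w

    # Acquisition: repeated error drives the active synapse down (LTD dominates)...
    w = [0.5, 0.5]
    for _ in range(100):
        w = update_pf_pc_weights(w, pf_activity=[1.0, 0.0], cf_error=1.0)
    # ...extinction: with the error removed, the active synapse slowly recovers (LTP).
    for _ in range(100):
        w = update_pf_pc_weights(w, pf_activity=[1.0, 0.0], cf_error=0.0)
    ```

    The asymmetry between the fast LTD and slow LTP rates is what gives acquisition and extinction their different time courses, which is the qualitative behavior the abstract reports across its tasks.
    
    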