
    Multimodal Man-machine Interface and Virtual Reality for Assistive Medical Systems

    The results of research on an intelligent multimodal man-machine interface and virtual reality tools for assistive medical systems, including computers and mechatronic systems (robots), are discussed. Gesture translation for people with disabilities, learning-by-showing technology, and a virtual operating room with 3D visualization are presented in this report; these results were announced at the international exhibition "Intelligent and Adaptive Robots–2005".

    Design of a Realistic Robotic Head based on Action Coding System

    In this paper, the development of a robotic head able to move and show different emotions is addressed. The movement and emotion generation system has been designed following the human facial musculature. Starting from the Facial Action Coding System (FACS), we have built a model with 26 action units that is able to produce the most relevant movements and emotions of a real human head. The whole work has been carried out in two steps. In the first step, a mechanical skeleton has been designed and built, in which the different actuators have been inserted. In the second step, a two-layered silicone skin has been manufactured, on which the different actuators have been inserted following the real muscle insertions, for performing the different movements and gestures. The developed head has been integrated in a high-level behavioural architecture, and pilot experiments with 10 users regarding emotion recognition and mimicking have been carried out. Funding: Junta de Castilla y León (Programa de apoyo a proyectos de investigación, Ref. VA036U14); Junta de Castilla y León (Programa de apoyo a proyectos de investigación, Ref. VA013A12-2); Ministerio de Economía, Industria y Competitividad (Grant DPI2014-56500-R).
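
    The abstract describes composing head movements and emotions from FACS action units (AUs) that drive physical actuators. A minimal Python sketch of that idea is given below; the AU-to-servo mapping, servo names, gains, and emotion prototypes are hypothetical illustrations, not the paper's 26-unit model.

        # Minimal sketch (not the authors' implementation): blending FACS action
        # units (AUs) into servo targets for a robotic head. AU numbers follow
        # FACS; servo channels, gains, and emotion prototypes are hypothetical.

        AU_TO_SERVOS = {
            1:  [("inner_brow_raiser", 1.0)],            # AU1
            4:  [("brow_lowerer", 1.0)],                 # AU4
            12: [("lip_corner_puller_L", 0.8),           # AU12 (smile)
                 ("lip_corner_puller_R", 0.8)],
            15: [("lip_corner_depressor_L", 1.0),        # AU15
                 ("lip_corner_depressor_R", 1.0)],
        }

        # Prototype emotions as weighted AU activations in [0, 1].
        EMOTIONS = {
            "happiness": {12: 1.0},
            "sadness":   {1: 0.7, 4: 0.5, 15: 1.0},
        }

        def expression_to_servo_targets(emotion):
            """Blend the AU activations of an emotion into per-servo targets in [0, 1]."""
            targets = {}
            for au, intensity in EMOTIONS[emotion].items():
                for servo, gain in AU_TO_SERVOS[au]:
                    targets[servo] = min(1.0, targets.get(servo, 0.0) + gain * intensity)
            return targets

        print(expression_to_servo_targets("sadness"))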

    Inter-hemispheric EEG coherence analysis in Parkinson's disease: Assessing brain activity during emotion processing

    Parkinson’s disease (PD) is not only characterized by its prominent motor symptoms but is also associated with disturbances in cognitive and emotional functioning. The objective of the present study was to investigate the influence of emotion processing on inter-hemispheric electroencephalography (EEG) coherence in PD. Multimodal emotional stimuli (happiness, sadness, fear, anger, surprise, and disgust) were presented to 20 PD patients and 30 age-, education level-, and gender-matched healthy controls (HC) while EEG was recorded. Inter-hemispheric coherence was computed from seven homologous EEG electrode pairs (AF3–AF4, F7–F8, F3–F4, FC5–FC6, T7–T8, P7–P8, and O1–O2) for the delta, theta, alpha, beta, and gamma frequency bands. In addition, subjective ratings were obtained for a representative set of the emotional stimuli. Inter-hemispherically, PD patients showed significantly lower coherence in the theta, alpha, beta, and gamma frequency bands than HC during emotion processing. No significant changes were found in delta frequency band coherence. We also found that PD patients were more impaired in recognizing negative emotions (sadness, fear, anger, and disgust) than relatively positive emotions (happiness and surprise). Behaviorally, PD patients did not show impairment in emotion recognition as measured by subjective ratings. These findings suggest that PD patients may have an impairment of inter-hemispheric functional connectivity (i.e., a decline in cortical connectivity) during emotion processing. This study may increase awareness of the value of EEG emotional-response studies in clinical practice for uncovering potential neurophysiologic abnormalities.
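
    As a concrete illustration of the analysis described above, the sketch below estimates mean magnitude-squared coherence for each homologous electrode pair and frequency band with SciPy. The electrode pairs and band edges follow the abstract; the sampling rate, window length, and synthetic data are assumptions.

        # Minimal sketch, assuming EEG is available as one NumPy array per channel.
        import numpy as np
        from scipy.signal import coherence

        FS = 256  # Hz, assumed sampling rate
        PAIRS = [("AF3", "AF4"), ("F7", "F8"), ("F3", "F4"),
                 ("FC5", "FC6"), ("T7", "T8"), ("P7", "P8"), ("O1", "O2")]
        BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}

        def interhemispheric_coherence(eeg):
            """Mean magnitude-squared coherence per homologous pair and band."""
            results = {}
            for left, right in PAIRS:
                f, cxy = coherence(eeg[left], eeg[right], fs=FS, nperseg=2 * FS)
                for band, (lo, hi) in BANDS.items():
                    mask = (f >= lo) & (f < hi)
                    results[(left + "-" + right, band)] = float(cxy[mask].mean())
            return results

        # Synthetic stand-in data (10 s per channel); real epochs would replace this.
        rng = np.random.default_rng(0)
        eeg = {ch: rng.standard_normal(10 * FS) for pair in PAIRS for ch in pair}
        print(interhemispheric_coherence(eeg)[("AF3-AF4", "alpha")])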

    MVC-3D: Adaptive Design Pattern for Virtual and Augmented Reality Systems

    In this paper, we present the MVC-3D design pattern for developing virtual and augmented (or mixed) reality interfaces that use new types of sensors and modalities and implement specific algorithms and simulation models. The proposed pattern extends the classic MVC pattern by enriching the View component (interactive View) and adding a specific component (Library). The results obtained on the development of augmented reality interfaces showed that the complexity of the M, iV and C components is reduced; the complexity increases only in the Library component (L). This helps programmers keep their models well structured even as the interface complexity increases. The proposed design pattern is also used in a design process called MVC-3D in the loop, which enables a seamless evolution from the initial prototype to the final system.
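
    The decomposition described above lends itself to a simple class skeleton. The sketch below is only illustrative of the MVC-3D idea (Model, interactive View, Controller, plus a Library component that absorbs sensor- and algorithm-specific complexity); all class and method names are hypothetical and not taken from the paper.

        # Illustrative skeleton of the MVC-3D decomposition; not the authors' code.

        class Library:
            """Encapsulates device drivers, tracking algorithms and simulation models."""
            def estimate_camera_pose(self, frame):
                ...  # e.g. marker or feature tracking would live here

        class Model:
            """Application state only; no rendering or sensor logic."""
            def __init__(self):
                self.scene_objects = []

        class InteractiveView:
            """3D rendering plus direct handling of interaction events."""
            def __init__(self, model, library):
                self.model, self.library = model, library
            def render(self, frame):
                pose = self.library.estimate_camera_pose(frame)
                return pose  # scene_objects would be drawn registered to this pose

        class Controller:
            """Maps user intents to model updates."""
            def __init__(self, model):
                self.model = model
            def add_object(self, obj):
                self.model.scene_objects.append(obj)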

    Design method for an anthropomorphic hand able to gesture and grasp

    This paper presents a numerical method to conceive and design the kinematic model of an anthropomorphic robotic hand used for gesturing and grasping. In the literature, there are few numerical methods for the finger placement of human-inspired robotic hands. In particular, there are no numerical methods for thumb placement that aim to improve the hand dexterity and grasping capabilities while keeping the hand design close to the human one. While existing models are usually the result of successive parameter adjustments by means of empirical tests, the proposed method determines the finger placements numerically. Moreover, a surgery test and the workspace analysis of the whole hand are used to find the best thumb position and orientation according to the hand kinematics and structure. The result is validated through simulation, where it is checked that the hand looks well balanced and that it meets our constraints and needs. The presented method provides a numerical tool which allows the easy computation of finger and thumb geometries and base placements for a human-like dexterous robotic hand. Comment: IEEE International Conference on Robotics and Automation, May 2015, Seattle, United States. IEEE, 2015, Proceedings of the IEEE International Conference on Robotics and Automation
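
    To illustrate the kind of workspace analysis mentioned above, the toy sketch below ranks candidate thumb base placements by how much the thumb fingertip workspace overlaps the workspace of an opposing finger, using random joint sampling in the plane. Link lengths, candidate poses and the grid resolution are made-up values, and the paper's actual formulation is not reproduced here.

        # Toy planar workspace-overlap scoring for thumb base placement; illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)

        def fingertip_cloud(base_xy, link_lengths, n=2000):
            """Sample planar fingertip positions for random joint angles in [0, pi/2]."""
            q = rng.uniform(0, np.pi / 2, size=(n, len(link_lengths)))
            angles = np.cumsum(q, axis=1)
            return base_xy + np.stack(
                [(np.cos(angles) * link_lengths).sum(axis=1),
                 (np.sin(angles) * link_lengths).sum(axis=1)], axis=1)

        def overlap_score(cloud_a, cloud_b, cell=0.005):
            """Count 5 mm grid cells reachable by both fingertip clouds."""
            cells_a = {tuple(np.floor(p / cell).astype(int)) for p in cloud_a}
            cells_b = {tuple(np.floor(p / cell).astype(int)) for p in cloud_b}
            return len(cells_a & cells_b)

        # Made-up link lengths (m): an index finger fixed at (0, 0.09) and a thumb
        # whose base slides along the palm edge.
        index_ws = fingertip_cloud(np.array([0.0, 0.09]), [0.045, 0.025, 0.02])
        candidates = [np.array([x, 0.0]) for x in np.linspace(-0.04, 0.02, 7)]
        scores = [overlap_score(fingertip_cloud(c, [0.05, 0.035, 0.03]), index_ws)
                  for c in candidates]
        print("best thumb base (toy):", candidates[int(np.argmax(scores))])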

    Computer- and robot-assisted Medical Intervention

    Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims, its different components and describes the place of robots in that context. The evolutions in terms of general design and control paradigms in the development of medical robots are presented and issues specific to that application domain are discussed. A view of existing systems, on-going developments and future trends is given. A case-study is detailed. Other types of robotic help in the medical environment (such as for assisting a handicapped person, for rehabilitation of a patient or for replacement of some damaged/suppressed limbs or organs) are out of the scope of this chapter.Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00

    Implementation of User-Independent Hand Gesture Recognition Classification Models Using IMU and EMG-based Sensor Fusion Techniques

    According to the World Health Organization, stroke is the third leading cause of disability. A common consequence of stroke is hemiparesis, which leads to the impairment of one side of the body and affects the performance of activities of daily living. It has been shown that targeting the motor impairments as early as possible, using wearable mechatronic devices for robot-assisted therapy, and letting the patient be in control of the robotic system can improve rehabilitation outcomes. However, despite the increased progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. This work presents a user-independent gesture classification method based on a sensor fusion technique that combines surface electromyography (EMG) and an inertial measurement unit (IMU). The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform 10 types of gestures in 4 different arm positions while wearing the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using 4 different classification methods. Finally, for each classification method, 5-fold cross-validation was used to test the efficacy of the classification algorithms. Overall classification accuracies in the range of 33.11%-72.1% were obtained. However, following the optimization of the gesture datasets, the overall classification accuracies increased to the range of 45.5%-84.5%. These results suggest that, by using the proposed sensor fusion approach, it is possible to achieve a more natural human-machine interface that allows better control of wearable mechatronic devices during robot-assisted therapies.
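
    A minimal sketch of the described pipeline is given below under stated assumptions: per-repetition EMG and IMU windows are reduced to simple features, concatenated (one common fusion approach), and classified with scikit-learn under 5-fold cross-validation. The feature choices, array shapes, SVM classifier and synthetic data are assumptions, not the paper's exact configuration.

        # Feature-level EMG/IMU fusion with 5-fold cross-validation; illustrative sketch.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def fuse_features(emg_windows, imu_windows):
            """Concatenate simple per-channel features from both modalities."""
            emg_feats = np.concatenate(
                [np.mean(np.abs(emg_windows), axis=2),   # mean absolute value
                 np.std(emg_windows, axis=2)], axis=1)   # waveform variability
            imu_feats = np.concatenate(
                [imu_windows.mean(axis=2), imu_windows.std(axis=2)], axis=1)
            return np.concatenate([emg_feats, imu_feats], axis=1)

        # Synthetic stand-in data: 220 repetitions, 8 EMG channels, 10 IMU channels,
        # 200 samples per window, 10 gesture classes.
        rng = np.random.default_rng(0)
        emg = rng.standard_normal((220, 8, 200))
        imu = rng.standard_normal((220, 10, 200))
        y = rng.integers(0, 10, size=220)

        X = fuse_features(emg, imu)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(clf, X, y, cv=5)
        print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))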