3,433 research outputs found

    Exploiting Multiple Sensory Modalities in Brain-Machine Interfaces

    Get PDF
    Recent improvements in cortically-controlled brain-machine interfaces (BMIs) have raised hopes that such technologies may improve the quality of life of severely motor-disabled patients. However, current-generation BMIs do not perform up to their potential because their training and control strategies neglect the full range of sensory feedback. Here we confirm that neurons in the primary motor cortex (MI) encode sensory information and demonstrate significant heterogeneity in their responses with respect to the type of sensory feedback available to the subject during a reaching task. Using mutual information and directional tuning analyses, we further show that the presence of multisensory feedback (i.e. vision and proprioception) during replay of movements evokes neural responses in MI that are almost indistinguishable from those measured during overt movement. Finally, we suggest how these playback-evoked responses may be used to improve BMI performance.
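The mutual information analysis mentioned above can be illustrated with a minimal sketch: a plug-in estimate of I(X;Y) between discrete movement directions and binned spike counts. The function is a standard textbook estimator; the data, variable names, and tuning model are hypothetical and not from the paper.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint probability
            if p_xy > 0:
                p_x = np.mean(x == xv)
                p_y = np.mean(y == yv)
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Hypothetical data: binned spike counts that depend on reach direction.
rng = np.random.default_rng(0)
direction = rng.integers(0, 4, size=2000)           # 4 reach directions
spikes = direction + rng.integers(0, 2, size=2000)  # direction-tuned counts
print(mutual_information(direction, spikes))        # positive: counts carry direction info
```

Comparing such estimates between overt-movement and replay conditions is one way to quantify how similar the evoked responses are.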

    Sensing with the Motor Cortex

    Get PDF
    The primary motor cortex is a critical node in the network of brain regions responsible for voluntary motor behavior. It has been less appreciated, however, that the motor cortex exhibits sensory responses in a variety of modalities, including vision and somatosensation. We review current work that emphasizes the heterogeneity of sensorimotor responses in the motor cortex and focus on its implications for cortical control of movement as well as for brain-machine interface development.

    Multimodal (multisensory) integration, in technology

    No full text
    The concept of modality has two sides, depending on the domain in which it is defined (typically cognitive science or human-computer interaction). This entry, though written in the framework of technology and system design, mainly uses the meaning from cognitive science: a modality is understood as a perceptual modality, and multimodality is understood as multisensory.

    Incorporating Feedback from Multiple Sensory Modalities Enhances Brain–Machine Interface Control

    Get PDF
    The brain typically uses a rich supply of feedback from multiple sensory modalities to control movement in healthy individuals. In many individuals, these afferent pathways, as well as their efferent counterparts, are compromised by disease or injury, resulting in significant impairments and reduced quality of life. Brain–machine interfaces (BMIs) offer the promise of recovered functionality to these individuals by allowing them to control a device using their thoughts. Most current BMI implementations use visual feedback for closed-loop control; however, it has been suggested that the inclusion of additional feedback modalities may lead to improvements in control. We demonstrate for the first time that kinesthetic feedback can be used together with vision to significantly improve control of a cursor driven by neural activity of the primary motor cortex (MI). Using an exoskeletal robot, the monkey's arm was moved to passively follow a cortically controlled visual cursor, thereby providing the monkey with kinesthetic information about the motion of the cursor. When visual and proprioceptive feedback were congruent, the time to successfully reach a target decreased and the cursor paths became straighter, compared with incongruent feedback conditions. This enhanced performance was accompanied by a significant increase in the amount of movement-related information contained in the spiking activity of neurons in MI. These findings suggest that BMI control can be significantly improved in paralyzed patients with residual kinesthetic sense, and they provide the groundwork for augmenting cortically controlled BMIs with multiple forms of natural or surrogate sensory feedback.
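The "straighter cursor paths" outcome above is commonly quantified as path efficiency: the straight-line distance from start to end divided by the distance actually traveled. A minimal sketch, with hypothetical trajectories (the metric is a standard one, not necessarily the paper's exact measure):

```python
import numpy as np

def path_efficiency(path):
    """Straight-line distance / traveled path length (1.0 = perfectly straight)."""
    path = np.asarray(path, dtype=float)
    straight = np.linalg.norm(path[-1] - path[0])
    traveled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return straight / traveled if traveled > 0 else 0.0

# Hypothetical 2-D cursor trajectories, one sample per time step.
congruent = [(0, 0), (1, 0.1), (2, -0.1), (3, 0)]  # nearly straight
incongruent = [(0, 0), (1, 1), (2, -1), (3, 0)]    # meandering
assert path_efficiency(congruent) > path_efficiency(incongruent)
```

Together with time-to-target, this gives a simple scalar per trial for comparing feedback conditions.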

    Multimodality in VR: A Survey

    Get PDF
    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching consumer level. VR is inherently different from traditional media: it offers a more immersive experience and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sensory cues yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed, and VR experiences created and consumed.

    Artificial proprioceptive feedback for myoelectric control

    Get PDF
    The typical control of myoelectric interfaces, whether in laboratory settings or real-life prosthetic applications, largely relies on visual feedback, because proprioceptive signals from the controlling muscles are either not available or very noisy. We conducted a set of experiments to test whether artificial proprioceptive feedback, delivered non-invasively to another limb, can improve control of a two-dimensional myoelectrically-controlled computer interface. In these experiments, participants were required to reach a target with a visual cursor that was controlled by electromyogram signals recorded from muscles of the left hand, while additional proprioceptive feedback was provided on their right arm by moving it with a robotic manipulandum. Provision of this artificial proprioceptive feedback improved the angular accuracy of their movements compared to using visual feedback alone, but did not increase the overall accuracy quantified by the average distance between the cursor and the target. The advantages conferred by proprioception were present only when the proprioceptive feedback had a similar orientation to the visual feedback in the task space, and not when it was mirrored, demonstrating the importance of congruency between feedback modalities for multi-sensory integration. Our results reveal the ability of the human motor system to learn new inter-limb sensory-motor associations; the motor system can utilize task-related sensory feedback even when it is available on a limb distinct from the one being actuated. In addition, the proposed task structure provides a flexible test paradigm by which the effectiveness of various sensory feedback and multi-sensory integration schemes for myoelectric prosthesis control can be evaluated.
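The two outcome measures above (angular accuracy versus average cursor-target distance) can be sketched as follows. These are generic formulations of the stated metrics; the exact definitions used in the study may differ, and all data here are illustrative.

```python
import math

def angular_error(cursor_vec, target_vec):
    """Absolute angle (radians) between movement direction and target direction."""
    dot = cursor_vec[0] * target_vec[0] + cursor_vec[1] * target_vec[1]
    na = math.hypot(cursor_vec[0], cursor_vec[1])
    nb = math.hypot(target_vec[0], target_vec[1])
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def mean_target_distance(cursor_path, target):
    """Average Euclidean distance between cursor samples and the target."""
    return sum(math.hypot(x - target[0], y - target[1])
               for x, y in cursor_path) / len(cursor_path)

# Illustrative trial: movement heads 45 degrees off the target direction.
print(angular_error((1, 1), (1, 0)))                  # ~0.785 rad
print(mean_target_distance([(0, 0), (2, 0)], (1, 0))) # 1.0
```

A condition can thus improve one metric (smaller angular error) while leaving the other (mean distance) unchanged, which is the dissociation the study reports.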

    Discovering Gender Differences in Facial Emotion Recognition via Implicit Behavioral Cues

    Full text link
    We examine the utility of implicit behavioral cues, in the form of EEG brain signals and eye movements, for gender recognition (GR) and emotion recognition (ER). Specifically, the examined cues are acquired via low-cost, off-the-shelf sensors. We asked 28 viewers (14 female) to recognize emotions from unoccluded (no mask) as well as partially occluded (eye and mouth masked) emotive faces. The obtained experimental results reveal that (a) reliable GR and ER is achievable with EEG and eye features, (b) differential cognitive processing, especially for negative emotions, is observed for males and females, and (c) some of these cognitive differences manifest under partial face occlusion, as typified by the eye and mouth mask conditions. Comment: To be published in the Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction.
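Recognition from EEG and eye features, as described above, is at its core a supervised classification problem on feature vectors. A minimal sketch using a nearest-centroid classifier on synthetic features; the feature layout, labels, and classifier choice are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vector (nearest-centroid classifier)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    # Distance from every sample to every class centroid; pick the nearest.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical features: concatenated EEG band powers and eye-movement stats.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, (50, 6)), rng.normal(2, 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)  # toy binary labels (e.g. gender classes)
classes, centroids = fit_centroids(X, y)
acc = np.mean(predict(X, classes, centroids) == y)
print(f"training accuracy: {acc:.2f}")
```

In practice, held-out evaluation (cross-validation across viewers) rather than training accuracy would be reported.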

    Distributed Sensing and Stimulation Systems Towards Sense of Touch Restoration in Prosthetics

    Get PDF
    Modern prostheses aim at restoring the functional and aesthetic characteristics of the lost limb. To foster prosthesis embodiment and functionality, it is necessary to restore both volitional control and sensory feedback. Contemporary feedback interfaces presented in research use few sensors and stimulation units and feed back at most two discrete variables (e.g. grasping force and aperture), whereas the human sense of touch relies on a distributed network of mechanoreceptors providing high-fidelity spatial information. To provide this type of feedback in prosthetics, it is necessary to sense tactile information from artificial skin placed on the prosthesis and to transmit tactile feedback above the amputation, in order to map the interaction between the prosthesis and the environment. This thesis proposes the integration of distributed sensing systems (e-skin) to acquire tactile sensation, and of non-invasive multichannel electrotactile feedback and virtual reality to deliver high-bandwidth information to the user. Its core focus is the development and testing of a closed-loop sensory feedback human-machine interface, based on the latest distributed sensing and stimulation techniques, for restoring the sense of touch in prosthetics. To this end, the thesis comprises two introductory chapters that describe the state of the art in the field, the objectives, the methodology used, and the contributions, as well as three studies distributed over the stimulation-system and sensing-system levels.
    The first study presents the development of a closed-loop compensatory tracking system to evaluate the usability and effectiveness of electrotactile sensory feedback in enabling real-time closed-loop control in prosthetics. It examines and compares the subjects' adaptive performance and tolerance to random latencies while performing a dynamic control task (i.e. position control) and simultaneously receiving either visual or electrotactile feedback communicating the momentary tracking error. Moreover, it reports the minimum time delay that causes an abrupt impairment of users' performance. The experimental results show that performance with electrotactile feedback is less prone to degradation with longer delays, whereas performance with visual feedback drops faster as the delay increases. Since some delays are inevitable, this is a good indication of the effectiveness of electrotactile feedback in enabling closed-loop control in prosthetics.
    The second study describes the development of a novel non-invasive, compact, multichannel interface for electrotactile feedback, comprising a 24-pad electrode matrix with a fully programmable stimulation unit, and investigates the ability of able-bodied human subjects to localize electrotactile stimuli delivered through the electrode matrix. Furthermore, it introduces a novel dual-parameter modulation (interleaved frequency and intensity) and compares it to conventional stimulation (the same frequency for all pads). In addition, and for the first time, it compares electrotactile stimulation to mechanical stimulation. Moreover, it describes the integration of a virtual prosthesis with the developed system, in order to achieve better user experience and object manipulation by mapping the tactile data collected in real time and feeding it back simultaneously to the user. The experimental results demonstrate that the proposed interleaved coding substantially improved spatial localization compared to same-frequency stimulation. Furthermore, they show that same-frequency stimulation was equivalent to mechanical stimulation, whereas performance with dual-parameter modulation was significantly better.
    The third study presents the realization of a novel, flexible, screen-printed e-skin based on P(VDF-TrFE) piezoelectric polymers, designed to cover the fingertips and the palm of a prosthetic hand (specifically the Michelangelo hand by Ottobock) and an assistive sensorized glove for stroke patients. Moreover, it develops a new validation methodology to examine the sensors' behavior while being solicited. The characterization results showed agreement between the expected (modeled) electrical response of each sensor and the measured mechanical (normal) force at the skin surface, which in turn proved that the combination of the fabrication and assembly processes was successful. This paves the way to defining a practical, simplified, and reproducible characterization protocol for e-skin patches.
    In conclusion, by adopting innovative methodologies in sensing and stimulation systems, this thesis advances the overall development of closed-loop sensory feedback human-machine interfaces for the restoration of the sense of touch in prosthetics. Moreover, this research could lead to high-bandwidth, high-fidelity transmission of tactile information for modern dexterous prostheses, which could improve the end-user experience and facilitate acceptance in daily life.
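The interleaved dual-parameter coding from the second study can be sketched as a checkerboard assignment over the electrode matrix: neighbouring pads alternate between two (frequency, intensity) pairs so that adjacent stimulation sites feel distinct. The pad layout, frequencies, and intensity values below are illustrative assumptions, not the thesis's actual parameters.

```python
# Assumed carrier frequencies and intensity steps for the two interleaved phases.
FREQS_HZ = (30, 60)
INTENSITY_MA = (1.0, 1.5)

def pad_params(row, col):
    """Assign (frequency, intensity) so no two side-by-side pads match."""
    phase = (row + col) % 2  # checkerboard interleaving
    return FREQS_HZ[phase], INTENSITY_MA[phase]

# 4 x 6 layout approximating a 24-pad electrode matrix.
matrix = [[pad_params(r, c) for c in range(6)] for r in range(4)]

# Any pad differs from its horizontal and vertical neighbours.
assert matrix[0][0] != matrix[0][1]
assert matrix[0][0] != matrix[1][0]
```

Giving adjacent pads distinct stimulation parameters is one plausible reading of why interleaved coding improves spatial localization over same-frequency stimulation.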

    Multisensory Perception and Learning: Linking Pedagogy, Psychophysics, and Human–Computer Interaction

    Get PDF
    In this review, we discuss how specific sensory channels can mediate the learning of properties of the environment. In recent years, schools have increasingly been using multisensory technology for teaching. However, this practice is not yet sufficiently grounded in neuroscientific and pedagogical evidence. Research has recently renewed our understanding of the role of communication between sensory modalities during development. In the current review, we outline four principles that will aid technological development, based on theoretical models of multisensory development and embodiment, to foster in-depth perceptual and conceptual learning of mathematics. We also discuss how a multidisciplinary approach offers a unique contribution to the development of new practical solutions for learning in school. Scientists, engineers, and pedagogical experts offer their interdisciplinary points of view on this topic. At the end of the review, we present our results, showing that multisensory technology combining multiple sensory inputs and sensorimotor associations can improve the discrimination of angles and may also serve broader educational purposes. Finally, we present an application, 'RobotAngle', developed for primary (i.e., elementary) school children, which uses sounds and body movements to teach angles.