7,672 research outputs found

    Neuro-electronic technology in medicine and beyond

    This dissertation looks at the technology and social issues involved in interfacing electronics directly to the human nervous system, in particular the methods for both reading from and stimulating nerves. The development and use of cochlear implants is discussed and compared with recent developments in artificial vision. The final sections consider a future for non-medical applications of neuro-electronic technology. Social attitudes towards use for both medical and non-medical purposes are discussed, and the viability of use in the latter case is assessed.

    Immersive Composition for Sensory Rehabilitation: 3D Visualisation, Surround Sound, and Synthesised Music to Provoke Catharsis and Healing

    There is a wide range of sensory therapies using sound, music and visual stimuli. Some focus on soothing or distracting stimuli, such as natural sounds or classical music, as analgesics, while other approaches emphasize the active performance of producing music as therapy. This paper proposes an immersive multi-sensory Exposure Therapy for people suffering from anxiety disorders, based on a rich, detailed surround-soundscape. This soundscape is composed to include the users’ own idiosyncratic anxiety triggers as a form of habituation, and to provoke psychological catharsis, as a non-verbal, visceral and enveloping exposure. To accurately pinpoint the most effective sounds and to optimally compose the soundscape, we will monitor the participants’ physiological responses, such as electroencephalography, respiration, electromyography, and heart rate, during exposure. We hypothesize that such physiologically optimized sensory landscapes will aid the development of future immersive therapies for various psychological conditions. Sound is a major trigger of anxiety, and auditory hypersensitivity is an extremely problematic symptom. Exposure to stress-inducing sounds can free anxiety sufferers from entrenched avoidance behaviors, teaching physiological coping strategies and encouraging resolution of the psychological issues agitated by the sound.

    Frontal brain asymmetries as effective parameters to assess the quality of audiovisual stimuli perception in adult and young cochlear implant users

    How is music perceived by cochlear implant (CI) users? This question arises as "the next step" given the impressive performance these patients achieve in language perception. Furthermore, how can music perception be evaluated beyond self-report ratings, in order to obtain measurable data? To address this question, estimation of the frontal electroencephalographic (EEG) alpha-activity imbalance, acquired through a 19-channel EEG cap, appears to be a suitable instrument to measure the approach/withdrawal (AW index) reaction to external stimuli. Specifically, a greater AW value indicates an increased propensity to approach the stimulus, and a lower value a tendency to withdraw from it. Additionally, because children and adult CI users typically acquired deafness prelingually and postlingually, respectively, the two groups would probably differ in music perception. The aim of the present study was to investigate children and adult CI users, in unilateral (UCI) and bilateral (BCI) implantation conditions, during three experimental situations of music exposure (normal, distorted and mute). Additionally, functional connectivity patterns within cerebral networks were analysed to investigate functioning patterns in the different experimental populations. As a general result, patterns were congruent between BCI patients and control (CTRL) subjects, characterised by the lowest values for the distorted condition (vs. normal and mute conditions) in both the AW index and the connectivity analysis. Additionally, the normal and distorted conditions differed significantly in CI and CTRL adults, and in CTRL children, but not in CI children. These results suggest a higher capacity for discrimination of, and approach motivation towards, normal music in CTRL and BCI subjects, but not in UCI patients. Therefore, in music perception CTRL and BCI participants appear more similar to each other than to UCI subjects, as estimated by measurable rather than self-reported parameters.
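The approach/withdrawal reading of frontal alpha imbalance described above is commonly computed as a log-ratio of right- over left-hemisphere alpha power. A minimal sketch of that convention follows; the electrode pair (F3/F4) and the exact formula are assumptions for illustration, not the study's published pipeline.

```python
import numpy as np

def aw_index(left_alpha_power, right_alpha_power):
    """Approach/withdrawal (AW) index from frontal alpha-band power.

    Uses the common frontal-alpha-asymmetry convention: higher
    right-hemisphere alpha (i.e. lower right-hemisphere activation)
    relative to the left yields a higher, "approach"-leaning score.
    Inputs are mean alpha-band power estimates (e.g. in uV^2).
    """
    return np.log(right_alpha_power) - np.log(left_alpha_power)

# Example with hypothetical alpha power at F3 (left) and F4 (right):
score = aw_index(left_alpha_power=4.0, right_alpha_power=6.0)  # > 0: approach
```

A positive score is read as approach motivation toward the stimulus, a negative one as withdrawal, matching the interpretation given in the abstract.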

    Standardization of electroencephalography for multi-site, multi-platform and multi-investigator studies: Insights from the canadian biomarker integration network in depression

    Subsequent to global initiatives in mapping the human brain and investigations of neurobiological markers for brain disorders, the number of multi-site studies involving the collection and sharing of large volumes of brain data, including electroencephalography (EEG), has been increasing. Among the complexities of conducting multi-site studies and increasing the shelf life of biological data beyond the original study are timely standardization and documentation of relevant study parameters. We present the insights gained and guidelines established within the EEG working group of the Canadian Biomarker Integration Network in Depression (CAN-BIND). CAN-BIND is a multi-site, multi-investigator, and multi-project network supported by the Ontario Brain Institute with access to Brain-CODE, an informatics platform that hosts a multitude of biological data across a growing list of brain pathologies. We describe our approaches and insights on documenting and standardizing parameters across the study design, data collection, monitoring, analysis, integration, knowledge-translation, and data archiving phases of CAN-BIND projects. We introduce a custom-built EEG toolbox to track data preprocessing, with open access for the scientific community. We also evaluate the impact of variation in equipment setup on the accuracy of acquired data. Collectively, this work is intended to inspire the establishment of comprehensive and standardized guidelines for multi-site studies.
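Tracking preprocessing provenance, as the toolbox mentioned above does, amounts to recording each step and its parameters in a machine-readable log archived alongside the data. The following is a minimal sketch of that idea only; the function and field names are hypothetical and do not reproduce the CAN-BIND toolbox itself.

```python
import json

def log_step(provenance, step_name, params):
    """Append one preprocessing step and its parameters to a
    provenance record, so multi-site EEG pipelines stay documented
    and comparable across sites and investigators."""
    provenance.append({"step": step_name, "params": params})
    return provenance

provenance = []
log_step(provenance, "bandpass_filter", {"low_hz": 1.0, "high_hz": 40.0})
log_step(provenance, "rereference", {"scheme": "average"})

# Serialized record, suitable for archiving next to the EEG files:
record = json.dumps(provenance, indent=2)
```

Archiving such a record with each dataset is one way to extend its shelf life beyond the original study, since later users can see exactly how the data were transformed.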

    Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information

    Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be through the integration of concordant information from multiple sensory channels in a common site of convergence, multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, as well as activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information that controls for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information, a midfrequency (MF) filtered condition containing place of articulation information, and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition. 
Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG) and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity from visual motion information, this study identifies potential MSI sites that we believe are involved in improving the intelligibility of speech perception.
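The selection rule above is a conjunction: a site counts as MSI only if it is active for both the MF and UF conditions relative to the LF baseline. A voxel-level sketch of that logic, assuming per-voxel activation contrasts are already available as arrays (the variable names and threshold are illustrative, not the study's actual analysis):

```python
import numpy as np

def msi_conjunction(act_mf, act_uf, act_lf, threshold=0.0):
    """Flag voxels active for both the mid-frequency (MF) and
    unfiltered (UF) conditions relative to the low-frequency (LF)
    condition. Inputs are per-voxel activation estimates; a voxel
    qualifies only if both (MF - LF) and (UF - LF) exceed the
    threshold, i.e. a conjunction of the two contrasts."""
    return ((act_mf - act_lf) > threshold) & ((act_uf - act_lf) > threshold)

mf = np.array([2.0, 0.5, 1.5])
uf = np.array([1.8, 1.2, 0.2])
lf = np.array([1.0, 1.0, 1.0])
mask = msi_conjunction(mf, uf, lf)  # only the first voxel passes both contrasts
```

Requiring both contrasts rules out sites driven by gross visual motion alone (present in LF), which is what lets the criterion isolate place-of-articulation correspondence.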

    Rehabilitative devices for a top-down approach

    In recent years, neurorehabilitation has moved from a "bottom-up" to a "top-down" approach. This change has also involved the technological devices developed for motor and cognitive rehabilitation. It implies that, during a task or therapeutic exercise, new "top-down" approaches are used to stimulate the brain more directly, eliciting plasticity-mediated motor re-learning. This is opposed to "bottom-up" approaches, which act at the physical level and attempt to bring about changes at the level of the central nervous system. Areas covered: In the present unsystematic review, we present the most promising innovative technological devices that can effectively support rehabilitation based on a top-down approach, according to the most recent neuroscientific and neurocognitive findings. In particular, we explore if and how the use of new technological devices, comprising serious exergames, virtual reality, robots, brain-computer interfaces, rhythmic music and biofeedback devices, might provide a top-down based approach. Expert commentary: Motor and cognitive systems are strongly intertwined in humans and thus cannot be separated in neurorehabilitation. Recently developed technologies in motor-cognitive rehabilitation might have a greater positive effect than conventional therapies.

    Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices

    The brain should integrate related but not unrelated information from different senses. Temporal patterning of inputs to different modalities may provide critical information about whether those inputs are related or not. We studied effects of temporal correspondence between auditory and visual streams on human brain activity with functional magnetic resonance imaging (fMRI). Streams of visual flashes with irregularly jittered, arrhythmic timing could appear on the right or left, with or without a stream of auditory tones that either coincided perfectly when present (highly unlikely by chance), were noncoincident with vision (a different erratic, arrhythmic pattern with the same temporal statistics), or appeared alone. fMRI revealed blood oxygenation level-dependent (BOLD) increases in the multisensory superior temporal sulcus (mSTS), contralateral to a visual stream when coincident with an auditory stream, and BOLD decreases for noncoincidence relative to unisensory baselines. Contralateral primary visual cortex and auditory cortex were also affected by audiovisual temporal correspondence or noncorrespondence, as confirmed in individuals. Connectivity analyses indicated enhanced influence from mSTS on primary sensory areas, rather than vice versa, during audiovisual correspondence. Temporal correspondence between auditory and visual streams affects a network of both multisensory (mSTS) and sensory-specific areas in humans, including even primary visual and auditory cortex, with stronger responses for corresponding and thus related audiovisual inputs.
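A noncoincident stream with "the same temporal statistics" as the visual one, as described above, can be constructed by shuffling the original stream's inter-onset intervals: the interval distribution is preserved while the event times no longer align. This is one plausible construction for illustration only, not the study's actual stimulus code.

```python
import numpy as np

def noncoincident_stream(onsets, rng):
    """Return an arrhythmic event stream sharing the input stream's
    temporal statistics: the inter-onset intervals are permuted, so
    the interval distribution (and total duration) is identical but
    individual event times generally no longer coincide."""
    intervals = np.diff(onsets)
    shuffled = rng.permutation(intervals)
    return onsets[0] + np.concatenate([[0.0], np.cumsum(shuffled)])

rng = np.random.default_rng(0)
visual = np.array([0.0, 0.4, 1.1, 1.5, 2.6])  # jittered visual flash onsets (s)
auditory = noncoincident_stream(visual, rng)   # matched but noncoincident tones
```

Because only the ordering of intervals changes, any summary statistic of the interval distribution (mean rate, variance, range) is identical across the two streams, isolating coincidence itself as the experimental variable.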