Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device
Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device (‘The vOICe’) was assessed under various conditions in blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner’s light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.
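The brightness-to-loudness mapping described above, including the reversed contrast (dark being loud) that the study found more effective, can be sketched as follows. This is a minimal illustration, not the vOICe's actual algorithm; the function name, frequency range, and row-to-pitch mapping are assumptions for the example.

```python
import numpy as np

def image_to_tones(image, f_min=500.0, f_max=5000.0, invert=False):
    """Map a 2D grayscale image (values in [0, 1]) to per-column tone
    parameters.  Columns are scanned left to right; each row is assigned
    a pitch (top = high, bottom = low) and pixel brightness sets loudness.
    With invert=True, dark pixels become loud -- the reversed contrast
    that worked better for novice users under indoor lighting.
    """
    n_rows, n_cols = image.shape
    # Log-spaced frequencies, highest pitch for the top row.
    freqs = np.geomspace(f_max, f_min, n_rows)
    tones = []
    for col in range(n_cols):
        brightness = image[:, col]
        loudness = 1.0 - brightness if invert else brightness
        tones.append(list(zip(freqs, loudness)))
    return tones

# A 3x3 image with a single bright pixel at the top-left:
img = np.zeros((3, 3))
img[0, 0] = 1.0
tones = image_to_tones(img)          # first column, top row is loud
tones_inv = image_to_tones(img, invert=True)  # same pixel is now silent
```

Playing each column's tones in sequence would yield the left-to-right "soundscape" the participants listened to.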
Detecting number processing and mental calculation in patients with disorders of consciousness using a hybrid brain-computer interface system
Background: For patients with disorders of consciousness such as coma, a vegetative state or a minimally conscious state, one challenge is to detect and assess the residual cognitive functions in their brains. Number processing and mental calculation are important brain functions but are difficult to detect in patients with disorders of consciousness using motor response-based clinical assessment scales such as the Coma Recovery Scale-Revised due to the patients' motor impairments and inability to provide sufficient motor responses for number- and calculation-based communication. Methods: In this study, we presented a hybrid brain-computer interface that combines P300 and steady state visual evoked potentials to detect number processing and mental calculation in Han Chinese patients with disorders of consciousness. Eleven patients with disorders of consciousness who were in a vegetative state (n = 6) or in a minimally conscious state (n = 3) or who emerged from a minimally conscious state (n = 2) participated in the brain-computer interface-based experiment. During the experiment, the patients with disorders of consciousness were instructed to perform three tasks, i.e., number recognition, number comparison, and mental calculation, including addition and subtraction. In each experimental trial, an arithmetic problem was first presented. Next, two number buttons, only one of which was the correct answer to the problem, flickered at different frequencies to evoke steady state visual evoked potentials, while the frames of the two buttons flashed in a random order to evoke P300 potentials. The patients needed to focus on the target number button (the correct answer). Finally, the brain-computer interface system detected P300 and steady state visual evoked potentials to determine the button to which the patients attended, further presenting the results as feedback. 
Results: Two of the six patients who were in a vegetative state, one of the three patients who were in a minimally conscious state, and the two patients that emerged from a minimally conscious state achieved accuracies significantly greater than the chance level. Furthermore, P300 potentials and steady state visual evoked potentials were observed in the electroencephalography signals from the five patients. Conclusions: Number processing and arithmetic abilities as well as command following were demonstrated in the five patients. Furthermore, our results suggested that through brain-computer interface systems, many cognitive experiments may be conducted in patients with disorders of consciousness, even though these patients cannot provide sufficient behavioral responses. © 2015 Li et al.
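The steady state visual evoked potential half of such a hybrid system can be sketched as a spectral power comparison: each number button flickers at its own frequency, and the system picks the frequency with the most power in the recorded signal. This is a deliberately minimal sketch, not the authors' classifier (which also combined P300 detection); the function name, sampling rate, and flicker frequencies are illustrative assumptions.

```python
import numpy as np

def ssvep_target(eeg, fs, freqs):
    """Return the candidate flicker frequency with the most spectral
    power in a single-channel EEG segment.

    eeg:   1D signal (one channel, one trial)
    fs:    sampling rate in Hz
    freqs: candidate flicker frequencies, one per number button
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Power at the FFT bin nearest each candidate frequency.
    powers = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(powers))]

# Simulated 4-second trial: the subject attends the button flickering
# at 7.5 Hz, so the EEG carries a 7.5 Hz component plus noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * rng.standard_normal(t.size)
attended = ssvep_target(eeg, fs, [6.0, 7.5])
```

In the hybrid design, this frequency-based decision would be fused with the P300 response evoked by the randomly flashing button frames before the feedback is presented.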
Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis
The use of Brain-Computer Interface (BCI) technology in neurorehabilitation provides new strategies to overcome stroke-related motor limitations. Recent studies demonstrated the brain's capacity for functional and structural plasticity through BCI. However, it is not yet clear how to take full advantage of the neurobiological mechanisms underlying recovery or how to maximize restoration through BCI. In this study we investigate the role of multimodal virtual reality (VR) simulations and motor priming (MP) in an upper limb motor-imagery BCI task in order to maximize the engagement of sensory-motor networks in a broad range of patients who can benefit from virtual rehabilitation training.
Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Most ongoing efforts have focused on training decoders on specific, stereotyped tasks in laboratory settings. Implementing brain-computer interfaces (BCIs) in natural settings requires adaptive strategies and scalable algorithms that require minimal supervision. Here we propose an unsupervised approach to decoding neural states from human brain recordings acquired in a naturalistic context. We demonstrate our approach on continuous long-term electrocorticographic (ECoG) data recorded over many days from the brain surface of subjects in a hospital room, with simultaneous audio and video recordings. We first discovered clusters in high-dimensional ECoG recordings and then annotated coherent clusters using speech and movement labels extracted automatically from audio and video recordings. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Our results show that our unsupervised approach can discover distinct behaviors from ECoG data, including moving, speaking and resting. We verify the accuracy of our approach by comparing to manual annotations. By projecting the discovered cluster centers back onto the brain, this technique opens the door to automated functional brain mapping in natural settings.
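The cluster-then-annotate pipeline can be sketched with a plain k-means over feature vectors, followed by majority-vote naming of each cluster from automatically extracted audio/video labels. The feature construction, initialization, and function names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means: cluster feature vectors (e.g. per-window ECoG band
    power) without any labels.  Naive init: the first k points."""
    centers = X[:k].copy()
    for _ in range(iters):
        # Assign every point to its nearest center.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

def annotate(assign, auto_labels):
    """Name each discovered cluster by the majority automatic label
    (e.g. 'speaking' / 'moving' / 'resting' from audio and video)."""
    names = {}
    for j in np.unique(assign):
        labels = [auto_labels[i] for i in np.nonzero(assign == j)[0]]
        names[j] = max(set(labels), key=labels.count)
    return names

# Toy stand-in for ECoG features: two well-separated behavioral states.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
auto_labels = ["resting"] * 20 + ["speaking"] * 20
assign, centers = kmeans(X, 2)
names = annotate(assign, auto_labels)
```

Projecting `centers` back onto electrode coordinates would give the functional-map view described in the abstract; the labels only name clusters after the fact and never influence the clustering itself.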