20 research outputs found

    Temporal convolutional neural networks to generate a head-related impulse response from one direction to another

    Virtual sound synthesis is a technology that allows users to perceive spatial sound through headphones or earphones. However, accurate virtual sound requires an individual head-related transfer function (HRTF), which can be difficult to measure because a specialized environment is needed. In this study, we proposed a method to generate HRTFs from one direction to another. To this end, we used temporal convolutional neural networks (TCNs) to generate head-related impulse responses (HRIRs). The TCNs were trained on publicly available datasets covering the horizontal plane. Using the trained networks, we successfully generated HRIRs for directions other than the front direction in these publicly available datasets. To test the generalization of the method, we measured HRIRs for a new dataset and examined whether the trained networks could be applied to it. Although the similarity evaluated by spectral distortion was slightly degraded, behavioral experiments with human participants showed that the generated HRIRs were equivalent to the measured ones. These results suggest that the proposed TCNs can be used to generate personalized HRIRs from one direction to another, which could contribute to the personalization of virtual sound.
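
    To make the modelling idea concrete, here is a minimal sketch of a dilated temporal convolutional network that maps a single-direction HRIR to an estimate for another direction, written in PyTorch. The channel counts, dilation factors, and 512-sample HRIR length are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions: PyTorch, 512-sample HRIRs, illustrative layer sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    """One dilated causal 1-D convolution with a residual connection."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad to keep the block causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):
        y = F.pad(x, (self.pad, 0))              # pad only on the left (past samples)
        return torch.relu(self.conv(y)) + x      # residual connection

class HRIRGenerator(nn.Module):
    """Maps a measured HRIR from one direction to an estimated HRIR for another."""
    def __init__(self, channels: int = 32, n_blocks: int = 4):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[TCNBlock(channels, kernel_size=3, dilation=2 ** i) for i in range(n_blocks)]
        )
        self.out = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, hrir_in):
        # hrir_in: (batch, 1, samples), e.g. the frontal-direction HRIR.
        return self.out(self.blocks(self.inp(hrir_in)))

model = HRIRGenerator()
example = torch.randn(8, 1, 512)   # batch of 512-sample input HRIRs
generated = model(example)         # same shape; estimate for the target direction
```

    In a setup like this, one could train a separate network per target direction, using the measured HRIRs of that direction as the regression target (for example, an L2 loss on the time-domain impulse responses); this is a plausible arrangement, not necessarily the one used in the paper.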

    Failure of human rhombic lip differentiation underlies medulloblastoma formation

    Medulloblastoma (MB) comprises a group of heterogeneous paediatric embryonal neoplasms of the hindbrain with strong links to early hindbrain development [1–4]. Mutations that activate Sonic hedgehog signalling lead to Sonic hedgehog MB in the upper rhombic lip (RL) granule cell lineage [5–8]. By contrast, mutations that activate WNT signalling lead to WNT MB in the lower RL [9,10]. However, little is known about the more commonly occurring group 4 (G4) MB, which is thought to arise in the unipolar brush cell lineage [3,4]. Here we demonstrate that somatic mutations that cause G4 MB converge on the core binding factor alpha (CBFA) complex, through mutually exclusive alterations that affect CBFA2T2, CBFA2T3, PRDM6, UTX and OTX2. CBFA2T2 is expressed early in the progenitor cells of the cerebellar RL subventricular zone in Homo sapiens, and G4 MB transcriptionally resembles these progenitors but is stalled in developmental time. Knockdown of OTX2 in model systems relieves this differentiation blockade, allowing MB cells to spontaneously proceed along normal developmental differentiation trajectories. The specific nature of the split human RL, which is destined to generate most of the neurons in the human brain, and its large population of susceptible EOMES+ KI67+ unipolar brush cell progenitor cells probably predispose our species to the development of G4 MB.

    An auditory brain-computer interface to detect changes in sound pressure level for automatic volume control

    Volume control is necessary to adjust sound levels for a comfortable audio or video listening experience. This study aims to develop an automatic volume control system based on a brain-computer interface (BCI). We therefore focused on a BCI using an auditory oddball paradigm and conducted two types of experiments. In the first experiment, participants were asked to pay attention to a target sound whose sound level was high (70 dB) compared with the other sounds (60 dB). Brain activity measured by electroencephalography showed a large positive response (P300) to the target sound, and classification of target and non-target sounds achieved an accuracy of 0.90. The second experiment adopted a two-target paradigm in which a low sound level (50 dB) was introduced as a second target. A P300 was also observed in the second experiment, and binary classification of target and non-target sounds achieved an accuracy of 0.76. Furthermore, accuracy was higher for the louder sounds than for the quieter ones. These results suggest the possibility of using a BCI for automatic volume control; however, its accuracy must be improved before it can be applied in daily life.
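
    As a rough illustration of the kind of target/non-target classification reported above, the sketch below applies a regularized linear discriminant classifier to flattened EEG epochs with scikit-learn. The epoch shape, channel count, and use of random placeholder data are assumptions for illustration; the paper's actual preprocessing and classifier are not specified here.

```python
# Minimal sketch (assumptions: epochs already extracted; random placeholder data
# stands in for real EEG, so the printed accuracy is meaningless by design).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_samples = 300, 16, 128     # e.g. 0-500 ms epochs at 256 Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder EEG
labels = rng.integers(0, 2, size=n_trials)          # 1 = target, 0 = non-target

# Flatten each epoch into one feature vector and classify target vs. non-target.
features = epochs.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean().round(2))
```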

    Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony

    Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention from the electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examined the impact of shortening the stimulus onset asynchrony (SOA) on this auditory BCI. While a very short SOA might improve performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be elicited if the SOA is too short. We therefore carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (the target direction). We used eight SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' responses to the target direction and evaluated how accurately the stimuli were recognized. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing identification accuracy, so that performance (evaluated by BCI utility) improved. On average, the highest BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA.
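
    For context, the BCI utility mentioned above is often computed with the definition of Dal Seno et al. (2010); whether the paper uses exactly this formulation is an assumption, and the accuracies and trial timings below are placeholders rather than reported results.

```python
# Minimal sketch (assumption: "utility" follows the common Dal Seno et al. (2010)
# definition; all numbers below are placeholders, not the paper's results).
import math

def bci_utility(accuracy: float, n_classes: int, selection_time_s: float) -> float:
    """Expected throughput in bits/s; zero at or below the 0.5 accuracy threshold."""
    if accuracy <= 0.5:
        return 0.0
    return (2 * accuracy - 1) * math.log2(n_classes - 1) / selection_time_s

n_classes = 6                 # six virtual sound directions
stimuli_per_selection = 6     # simplification: one stimulus per direction per selection
for soa_ms, acc in [(400, 0.80), (500, 0.82), (1100, 0.85)]:   # placeholder accuracies
    selection_time = soa_ms / 1000 * stimuli_per_selection
    u = bci_utility(acc, n_classes, selection_time)
    print(f"SOA {soa_ms} ms -> utility {u:.2f} bits/s")
```

    Even with placeholder numbers, the trade-off the abstract describes is visible: a shorter SOA reduces selection time, so utility can increase even if accuracy drops slightly.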

    Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    The auditory brain-computer interface (BCI) using electroencephalography (EEG) is a subject of intensive study. Auditory BCIs can use many characteristics of a stimulus as cues, such as tone, pitch, and voice. Spatial information about auditory stimuli is also useful for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables virtual auditory stimuli to be presented to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from the EEG signals whether the subject attended the direction of a presented stimulus. The mean accuracy across subjects was 70.0% for single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performance. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less portable BCI system.
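
    A hedged sketch of the trial-averaging plus support-vector-machine step described above is given below, again on placeholder data; the averaging group size of 10 matches the abstract, but the array shapes, linear kernel, and feature layout are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch (assumptions: placeholder data, linear-kernel SVM, flattened
# 200-500 ms epochs as features; not the paper's exact pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_samples = 600, 64, 77      # e.g. 200-500 ms at 256 Hz
rng = np.random.default_rng(1)
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder EEG
labels = rng.integers(0, 2, size=n_trials)          # attended vs. unattended direction

def average_trials(x, y, k=10):
    """Average non-overlapping groups of k same-label epochs to raise the SNR."""
    out_x, out_y = [], []
    for lbl in np.unique(y):
        idx = np.flatnonzero(y == lbl)
        for start in range(0, len(idx) - k + 1, k):
            out_x.append(x[idx[start:start + k]].mean(axis=0))
            out_y.append(lbl)
    return np.stack(out_x), np.array(out_y)

avg_x, avg_y = average_trials(epochs, labels, k=10)   # 10-trial averaging
features = avg_x.reshape(len(avg_x), -1)
scores = cross_val_score(SVC(kernel="linear"), features, avg_y, cv=5)
print("mean cross-validated accuracy (10-trial averages):", scores.mean().round(2))
```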

    Experimental setting and protocol.

    (A) Six directions for the virtual auditory stimuli. The subject looked forward (0°). (B) Each trial consisted of a stimulus and an inter-stimulus interval. An auditory stimulus (cue) was presented for 100 ms after stimulus onset, and the inter-stimulus interval was 1000 ms. 150 trials were performed per session.

    EEG electrode locations.

    Spatial locations of the 64-channel (A), 19-channel (B), and 6-channel (C) EEG montages. Reference electrodes were attached to the ears.

    Accuracy in predicting the perceived direction and its directional biases.

    (A) Classification accuracy as a function of the number of averaged trials. Each colored line represents the accuracy for an individual subject; the bold black line indicates the mean accuracy across subjects. (B) Classification accuracy for each direction, shown for 10-trial averaging. Each line shows the mean accuracy across subjects for each direction, together with the average over all directions.