
    Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training

    Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural “blueprints” may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre–post design to measure functional and structural brain changes. We found that blind and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.

    Human echolocation: waveform analysis of tongue clicks

    Some blind individuals have the ability to detect and classify objects in complex scenes by using echolocation based on ‘tongue clicks’. A waveform analysis is presented of tongue clicks collected from three blind individuals who use tongue-click based echolocation on a daily basis. It is found that the tongue clicks are wideband signals and that the spectrum of clicks varies within and between individuals. However, by using the wideband ambiguity function, it is found that all of the clicks from the three different individuals share some common characteristics.
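The wideband ambiguity function used in this analysis compares a signal against time-scaled, delayed replicas of itself, which is the appropriate tool for wideband signals such as clicks. A minimal numerical sketch of a discrete approximation follows; the sample rate, click waveform, and scale/delay grids are illustrative choices, not values from the study:

```python
import numpy as np

def wideband_ambiguity(x, scales, delays, fs=44100):
    """Discrete approximation of the wideband ambiguity function
    WAF(tau, s) = sqrt(s) * integral x(t) * x(s * (t - tau)) dt
    for a real-valued waveform x, using linear interpolation to
    form the time-scaled, delayed replica."""
    n = len(x)
    t = np.arange(n) / fs
    out = np.zeros((len(scales), len(delays)))
    for i, s in enumerate(scales):
        for j, tau in enumerate(delays):
            # time axis of the scaled, delayed replica; zero outside support
            rep = np.interp(s * (t - tau), t, x, left=0.0, right=0.0)
            out[i, j] = np.sqrt(s) * np.sum(x * rep) / fs
    return out

# Illustrative synthetic "click": a damped 3 kHz tone burst.
fs = 44100
t = np.arange(256) / fs
click = np.sin(2 * np.pi * 3000 * t) * np.exp(-t * 2000)
waf = wideband_ambiguity(click, np.array([0.9, 1.0, 1.1]),
                         np.array([-0.001, 0.0, 0.001]), fs)
```

As expected, the surface peaks at unit scale and zero delay (self-match), and the spread of the peak across scales and delays characterizes the click's wideband structure.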

    Human echolocators adjust loudness and number of clicks for detection of reflectors at various azimuth angles

    Bats have been shown to adjust their emissions to situational demands. Here we report similar findings for human echolocation. We asked eight blind expert echolocators to detect reflectors positioned at various azimuth angles. The same 17.5 cm diameter circular reflector placed at 100 cm distance at 0°, 45° or 90° with respect to straight ahead was detected with 100% accuracy, but performance dropped to approximately 80% when it was placed at 135° (i.e. somewhat behind) and to chance levels (50%) when placed at 180° (i.e. right behind). This can be explained by poorer target ensonification owing to the beam pattern of human mouth clicks. Importantly, analyses of sound recordings show that echolocators increased the loudness and number of clicks for reflectors at farther angles. Echolocators were able to reliably detect reflectors when level differences between echo and emission were as low as −27 dB, which is much lower than expected based on previous work. Increasing the intensity and number of clicks improves the signal-to-noise ratio and in this way compensates for weaker target reflections. Our results are, to our knowledge, the first to show that human echolocation experts adjust their emissions to improve sensory sampling. An implication of our findings is that human echolocators accumulate information from multiple samples.
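The compensation mechanism described here, emitting more clicks to offset weaker reflections, follows the standard coherent-averaging argument: averaging N recordings of the same echo with independent noise reduces noise power by a factor of N, i.e. an SNR gain of 10·log10(N) dB. A hypothetical simulation illustrating this (the echo waveform and noise level are made up, not measured data):

```python
import numpy as np

def averaging_gain_db(n_clicks):
    # Coherent averaging of n independent noisy recordings of the same
    # echo lowers noise power by a factor of n -> SNR gain = 10*log10(n) dB.
    return 10.0 * np.log10(n_clicks)

def simulated_snr_db(n_clicks, rng):
    """Measured SNR (dB) after averaging n_clicks noisy echo recordings.
    Echo template and noise level are illustrative placeholders."""
    t = np.arange(1024) / 44100.0
    echo = np.sin(2 * np.pi * 3500 * t) * np.exp(-t * 2000)
    avg = np.mean([echo + rng.normal(0.0, 2.0, t.size)
                   for _ in range(n_clicks)], axis=0)
    residual_noise = avg - echo
    return 10 * np.log10(np.mean(echo**2) / np.mean(residual_noise**2))
```

For example, accumulating 10 clicks instead of 1 buys roughly 10 dB of SNR, which is consistent with the idea that echolocators accumulate information from multiple samples to detect echoes far below the emission level.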

    Bio-inspired radar: recognition of human echolocator tongue click signals

    Echolocation is a process in which sound waves are transmitted and the returning echoes are analyzed to determine information about the surrounding environment. The principle of echolocation, inspired by bats, has been widely used in radar and sonar applications. Less well known is that this technique is also used by a small group of blind humans in their daily lives, mainly for navigation and object recognition, with high accuracy. To date, only a few technical studies have looked at how these echolocators are able to detect their own echoes. Conventional detection using a matched filter, as in radar applications, is not suitable for this signal owing to the existence of multiple frequency components. Thus, this paper discusses an alternative approach to recognizing human echolocator tongue click signals using the Linde-Buzo-Gray vector quantization method. The significant click features, namely the multiple frequencies themselves, were extracted from the raw transmitted and echo signals and used for the recognition process. Although gaps remain to be filled, the biologically inspired technique presented here may provide useful information, particularly in signal processing for radar and sonar systems, in the future.
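The Linde-Buzo-Gray method named above trains a codebook by repeatedly splitting codewords and re-centering each on the vectors assigned to it; a click is then recognized by its nearest codeword. A minimal sketch under stated assumptions: the feature vectors (pairs of spectral peak frequencies per click) and their values below are hypothetical, not the paper's data:

```python
import numpy as np

def lbg_codebook(vectors, size, eps=0.01, tol=1e-6):
    """Train a codebook with the Linde-Buzo-Gray splitting algorithm.

    vectors: (n, d) array of feature vectors (e.g. spectral peaks per click)
    size:    target codebook size, reached by repeated binary splitting
    """
    codebook = vectors.mean(axis=0, keepdims=True)  # start from the global mean
    while len(codebook) < size:
        # split every codeword into a slightly perturbed pair
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        while True:
            # nearest-codeword assignment (squared Euclidean distance)
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            dist = d.min(axis=1).mean()
            if prev - dist < tol:   # converged for this codebook size
                break
            prev = dist
            # move each codeword to the centroid of its cell
            for k in range(len(codebook)):
                members = vectors[labels == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Index of the nearest codeword for each feature vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

With a trained codebook, recognition reduces to checking which codeword an incoming click's features fall nearest to; the splitting schedule is what distinguishes LBG from plain k-means initialization.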

    Mouth-Clicks used by Blind Expert Human Echolocators – Signal Description and Model Based Signal Synthesis

    Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of reception of the resultant sound. For the current report, we collected a large database of click emissions with three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at farther angles, more than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2–4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3–15 ms and peak frequencies of 2–8 kHz, which were based on less detailed measurements. Based on our measurements we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation.
Relatedly, the data are a basis to develop synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeaker, microphones), and which will help in understanding the link between physical principles and human behaviour.
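The proposed model form (a sum of monotones under a decaying exponential envelope, with angular attenuation by a modified cardioid) can be sketched directly. The frequencies, amplitudes, decay constant, and cardioid shape parameter below are illustrative placeholders, not the per-echolocator parameters the paper reports:

```python
import numpy as np

def synth_click(freqs_hz, amps, decay_s, dur=0.003, fs=96000):
    """Synthesize a mouth click as a sum of monotones (pure tones)
    modulated by a single decaying exponential envelope.
    All parameter values passed in are illustrative, not fitted."""
    t = np.arange(int(dur * fs)) / fs
    envelope = np.exp(-t / decay_s)
    tones = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs_hz, amps))
    return envelope * tones

def cardioid_gain(theta_rad, alpha=0.5):
    """Modified cardioid angular attenuation: gain 1 on-axis (theta = 0),
    falling off with azimuth. alpha controls directivity; alpha = 0.5
    gives the classic cardioid."""
    return alpha + (1 - alpha) * np.cos(theta_rad)

# Hypothetical usage: a ~3 ms click with energy at 3 kHz and 10 kHz,
# "heard" 60 degrees off-axis.
click = synth_click([3000.0, 10000.0], [1.0, 0.3], decay_s=0.0007)
off_axis = cardioid_gain(np.pi / 3) * click
```

Such a parametric emitter, paired with an HRTF for reception, is the kind of virtual synthetic model the abstract points toward.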