26 research outputs found

    Mouth-Clicks used by Blind Expert Human Echolocators – Signal Description and Model Based Signal Synthesis

    Echolocation is the ability to use sound-echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of reception of the resultant sound. For the current report, we collected a large database of click emissions with three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at wider angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation. Relatedly, the data are a basis to develop synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeaker, microphones), and which will help us understand the link between physical principles and human behaviour
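The emission model described above lends itself to a compact numerical sketch. The frequencies, amplitudes, decay constant, and cardioid exponent below are illustrative placeholders, not the per-echolocator parameters reported in the paper:

```python
import numpy as np

def synthetic_click(fs=96000, duration=0.003,
                    freqs=(2500.0, 3500.0, 10000.0),   # Hz, placeholder monotones
                    amps=(1.0, 0.8, 0.2),              # relative amplitudes
                    tau=0.0008):                       # decay constant, seconds
    """Sum of monotones, each modulated by a decaying exponential."""
    t = np.arange(int(fs * duration)) / fs
    click = sum(a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
                for a, f in zip(amps, freqs))
    return click / np.max(np.abs(click))   # normalise to unit peak

def cardioid_gain(theta_deg, k=0.5):
    """Modified cardioid: near-constant level close to 0°, gradual roll-off beyond."""
    return ((1.0 + np.cos(np.radians(theta_deg))) / 2.0) ** k

click = synthetic_click()   # ~3 ms click sampled at 96 kHz (288 samples)
```

A beam-pattern-weighted click at azimuth theta is then simply `cardioid_gain(theta) * click`.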

    Increased emission intensity can compensate for the presence of noise in human click-based echolocation

    Echolocating bats adapt their emissions to succeed in noisy environments. In the present study we investigated if echolocating humans can detect a sound-reflecting surface in the presence of noise and if intensity of echolocation emissions (i.e. clicks) changes in a systematic pattern. We tested people who were blind and had experience in echolocation, as well as blind and sighted people who had no experience in echolocation prior to the study. We used an echo-detection paradigm where participants listened to binaural recordings of echolocation sounds (i.e. they did not make their own click emissions), and where intensity of emissions and echoes changed adaptively based on participant performance (intensity of echoes was yoked to intensity of emissions). We found that emission intensity had to systematically increase to compensate for weaker echoes relative to background noise. In fact, emission intensity increased so that spectral power of echoes exceeded spectral power of noise by 12 dB in the 4-kHz and 5-kHz frequency bands. The effects were the same across all participant groups, suggesting that this effect occurs independently of long-time experience with echolocation. Our findings demonstrate for the first time that people can echolocate in the presence of noise and suggest that one potential strategy to deal with noise is to increase emission intensity to maintain the signal-to-noise ratio of certain spectral components of the echoes
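The 12 dB band-level criterion can be illustrated numerically. The signals, sampling rate, and band edges below are toy placeholders rather than the study's actual stimuli:

```python
import numpy as np

def band_power_db(signal, fs, f_lo, f_hi):
    """Mean spectral power (dB) of `signal` between f_lo and f_hi Hz."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10.0 * np.log10(np.mean(power[band]))

fs = 44100
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
echo = 0.002 * np.sin(2 * np.pi * 4500.0 * t)   # weak toy echo component at 4.5 kHz
noise = 0.01 * rng.standard_normal(fs)          # toy background noise

def band_snr_db(echo, noise):
    return band_power_db(echo, fs, 4000, 6000) - band_power_db(noise, fs, 4000, 6000)

# Raise the emission (and, yoked to it, the echo) until the echo's band
# power exceeds the noise's by 12 dB.
deficit_db = 12.0 - band_snr_db(echo, noise)
if deficit_db > 0:
    echo = echo * 10 ** (deficit_db / 20.0)
```

Scaling the amplitude by 10^(deficit/20) raises band power by exactly `deficit_db` decibels, bringing the band SNR to the 12 dB target.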

    Effects of type of emission and masking sound, and their spatial correspondence, on blind and sighted people’s ability to echolocate

    Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in noise-masker or noise in clicks-masker), as compared to when target and masker used the same type of sound (e.g. clicks in click-, or noise in noise-masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks or noise may differ in their sensitivity to binaural cues. We observed the same pattern of results for echolocation experts, and blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted
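An adaptive staircase of the kind described can be sketched in a few lines. This is a generic 1-up/2-down rule (which converges near 70.7% correct); the study's exact staircase parameters are not stated in the abstract, so this is an assumption:

```python
def staircase(responses, start_snr=0.0, step=2.0):
    """1-up/2-down adaptive track over a list of correct/incorrect responses:
    SNR decreases after two consecutive correct responses (task gets harder)
    and increases after every error (task gets easier)."""
    snr, streak, track = start_snr, 0, []
    for correct in responses:
        track.append(snr)          # SNR presented on this trial
        if correct:
            streak += 1
            if streak == 2:        # two in a row: lower the SNR
                snr -= step
                streak = 0
        else:                      # error: raise the SNR
            snr += step
            streak = 0
    return track

print(staircase([True, True, False, True]))   # [0.0, 0.0, -2.0, 0.0]
```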

    Localization of a Virtual Wall by Means of Active Echolocation by Untrained Sighted Persons

    The active sensing and perception of the environment by auditory means is typically known as echolocation and it can be acquired by humans, who can profit from it in the absence of vision. We investigated the ability of twenty-one untrained sighted participants to use echolocation with self-generated oral clicks for aligning themselves within the horizontal plane towards a virtual wall, emulated with an acoustic virtual reality system, at distances between 1 and 32 m, in the absence of background noise and reverberation. Participants were able to detect the virtual wall on 61% of the trials, although with large differences across individuals and distances. The use of louder and shorter clicks led to increased performance, whereas the use of clicks with lower frequency content allowed for the use of interaural time differences to improve the accuracy of reflection localization at very long distances. The distance of 2 m was the most difficult to detect and localize, whereas the furthest distances of 16 and 32 m were the easiest ones. Thus, echolocation may be used effectively to identify large distant environmental landmarks such as buildings
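For intuition about the distances involved: a wall at distance d returns its echo after a round-trip delay of 2d/c. A minimal sketch, assuming a speed of sound of 343 m/s:

```python
def echo_delay_ms(distance_m, c=343.0):
    """Round-trip delay (ms) of an echo from a wall `distance_m` metres away."""
    return 2.0 * distance_m / c * 1000.0

for d in (1, 2, 4, 8, 16, 32):
    print(f"{d:2d} m -> {echo_delay_ms(d):6.1f} ms")
```

Delays range from roughly 6 ms at 1 m to roughly 187 ms at 32 m, spanning the transition from echoes that overlap the click's tail to echoes heard as clearly separate events.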

    Human echolocators adjust loudness and number of clicks for detection of reflectors at various azimuth angles

    Bats have been shown to adjust their emissions to situational demands. Here we report similar findings for human echolocation. We asked eight blind expert echolocators to detect reflectors positioned at various azimuth angles. The same 17.5 cm diameter circular reflector placed at 100 cm distance at 0°, 45° or 90° with respect to straight ahead was detected with 100% accuracy, but performance dropped to approximately 80% when it was placed at 135° (i.e. somewhat behind) and to chance levels (50%) when placed at 180° (i.e. right behind). This can be explained based on poorer target ensonification owing to the beam pattern of human mouth clicks. Importantly, analyses of sound recordings show that echolocators increased loudness and numbers of clicks for reflectors at farther angles. Echolocators were able to reliably detect reflectors when level differences between echo and emission were as low as −27 dB, which is much lower than expected based on previous work. Increasing intensity and numbers of clicks improves signal-to-noise ratio and in this way compensates for weaker target reflections. Our results are, to our knowledge, the first to show that human echolocation experts adjust their emissions to improve sensory sampling. An implication from our findings is that human echolocators accumulate information from multiple samples
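The point that more clicks compensate for weak echoes follows from a standard signal-processing result: coherently averaging n samples with independent noise improves SNR by 10·log10(n) dB. The formula is a textbook assumption, not a figure taken from the paper:

```python
import math

def averaging_gain_db(n_clicks):
    """SNR gain (dB) from coherently averaging n clicks with independent noise."""
    return 10.0 * math.log10(n_clicks)

print(round(averaging_gain_db(2), 2))    # doubling the number of clicks buys ~3 dB
print(round(averaging_gain_db(10), 2))   # ten clicks buy 10 dB
```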

    Biologically-inspired radar sensing

    The natural world has an unquantifiable complexity and natural life exhibits remarkable techniques for responding to and interacting with the natural world. This thesis aims to find new approaches to radar systems by exploring the paradigm of biologically-inspired design to find effective ways of using the flexibility of modern radar systems. In particular, this thesis takes inspiration from the astonishing feats of human echolocators and the complex cognitive processes that underpin the human experience. Interdisciplinary research into human echolocator tongue clicks is presented before two biologically-inspired radar techniques are proposed, developed, and analyzed using simulations and experiments. The first radar technique uses the frequency-diversity of a radar system to localize targets in angle, and the second technique uses the degrees-of-freedom accessible to a mobile robotic platform to implement a cognitive radar architecture for obstacle avoidance and navigation

    Biologically-inspired wideband target localisation


    Independence Through Unseen Architecture; Investigating Multi-Sensory Design for the Visually Impaired and Blind

    Sensory abilities are what shape our consciousness of the surrounding environment and our concept of the world. Architecture has been based upon aesthetics, function, and form over periods of time to appeal to and interact with the senses of its users. What if one or more of your senses cannot utilize space as it was intended? A primary sense in relation to architecture is sight. This raises the question: how can architects design insightful architecture for those who cannot see it? According to the World Health Organization, there are “285 million people globally with visual impairments, of whom 39 million are completely blind” (WHO Releases New Global Estimates on Visual Impairments, 2012). This thesis targets the disabilities of vision to illuminate the architectural discrimination that has occurred through the underrepresentation of human perception and the bias towards visual aspects in architecture. It is architects’ responsibility to ensure design does not exclude any type of individual from utilizing space, through multi-sensory, life-enhancing architecture. Architecture that is life-enhancing is established through three methods: by being informative, by being experiential, and by promoting independence, equality, and growth for all types of individuals. Inclusive architecture can be achieved through seven key design parameters to create beneficial multi-sensory design. These parameters consist of lighting, color and contrast, olfactory, haptics, acoustics, materiality, and spatial circulation. These parameters are beneficial to all and can eliminate the segregation that has occurred in the built environment for the visually impaired. This thesis will interact with the senses through these seven key design parameters and implement them through architectural applications that can be utilized in any building typology to enhance the built environment for those with visual impairments.
    This thesis will act as a barrier-free precedent and prescriptive guide to design for the visually impaired. It proposes a call to action for architecture to recenter its focus on multi-sensory design and remind us of the importance of making the invisible visible to all