
    Listening carefully: increased perceptual acuity for species discrimination in multispecies signalling assemblages

    Communication is a fundamental component of evolutionary change because of its role in mate choice and sexual selection. Acoustic signals are a vital element of animal communication and sympatric species may use private frequency bands to facilitate intraspecific communication and identification of conspecifics (acoustic communication hypothesis, ACH). If so, animals should show increasing rates of misclassification with increasing overlap in frequency between their own calls and those used by sympatric heterospecifics. We tested this on the echolocation of the horseshoe bat, Rhinolophus capensis, using a classical habituation-dishabituation experiment in which we exposed R. capensis from two phonetic populations to echolocation calls of sympatric and allopatric horseshoe bat species (Rhinolophus clivosus and Rhinolophus damarensis) and different phonetic populations of R. capensis. As predicted by the ACH, R. capensis from both test populations were able to discriminate between their own calls and calls of the respective sympatric horseshoe bat species. However, only bats from one test population were able to discriminate between calls of allopatric heterospecifics and their own population when both were using the same frequency. The local acoustic signalling assemblages (ensemble of signals from sympatric conspecifics and heterospecifics) of the two populations differed in complexity as a result of contact with other phonetic populations and sympatric heterospecifics. We therefore propose that a hierarchy of discrimination ability has evolved within the same species. Frequency alone may be sufficient to assess species membership in relatively simple acoustic assemblages but the ability to use additional acoustic cues may have evolved in more complex acoustic assemblages to circumvent misidentifications as a result of the use of overlapping signals. 
When the acoustic signal design is under strong constraints as a result of dual functions, and the available acoustic space is limited because of co-occurring species, species discrimination is mediated through improved sensory acuity in the receiver.

    Aspects of room acoustics, vision and motion in the human auditory perception of space

    The human sense of hearing contributes to the awareness of where sound-generating objects are located in space and of the environment in which the hearing individual is located. This auditory perception of space interacts in complex ways with our other senses, can be both disrupted and enhanced by sound reflections, and includes safety mechanisms which have evolved to protect our lives but can also mislead us. This dissertation explores some selected topics from this wide subject area, mostly by testing the abilities and subjective judgements of human listeners in virtual environments. Reverberation is the gradually decaying persistence of sound in an enclosed space that results from repeated reflections at surfaces. The first experiment (Chapter 2) compared how strongly people perceived reverberation in different visual situations: when they could see the room and the source which generated the sound; when they could see some room and some sound source, but the image did not match what they heard; and when they could not see anything at all. There were no indications that the visual image had any influence on this aspect of room-acoustical perception. The potential benefits of motion for judging the distance of sound sources were the focus of the second study (Chapter 3), which consists of two parts. In the first part, loudspeakers were placed at different depths in front of seated listeners who, on command, had to either remain still or move their upper bodies sideways. This experiment demonstrated that humans can exploit motion parallax (the effect that closer objects appear to move faster relative to a moving observer than farther objects do) with their ears and not just with their eyes. 
The second part combined a virtualisation of such sound sources with a motion platform to show that the listeners’ interpretation of this auditory motion parallax was better when they performed the lateral movement themselves than when they were moved by the apparatus or were not actually in motion at all. Two more experiments were concerned with sounds which are perceived as becoming louder over time. These have been called “looming”, as the source of such a sound might be on a collision course. One of the studies (Chapter 4) showed that western diamondback rattlesnakes (Crotalus atrox) increase the vibration speed of their rattle in response to the approach of a threatening object. It also demonstrated that human listeners perceive (virtual) snakes which engage in this behaviour as especially close, causing them to keep a greater margin of safety than they would otherwise. The other study (section 5.6) was concerned with the well-known looming bias of the sound localisation system, a phenomenon which leads to a sometimes exaggerated, sometimes more accurate perception of approaching compared to receding sounds. It attempted to find out whether this bias is affected by whether listeners hear such sounds in a virtual enclosed space or in an environment with no sound reflections. While the results were inconclusive, this experiment is noteworthy as a proof of concept: it was the first study to make use of a new real-time room-acoustical simulation system, liveRAZR, which was developed as part of this dissertation (Chapter 5). 
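The motion parallax cue described above is purely geometric: a source at distance d sweeps past a laterally moving listener at an angular rate inversely proportional to d. A minimal sketch of that relation, with an assumed, purely illustrative lateral body speed:

```python
import math

# Auditory motion parallax: a source at distance d, for a listener moving
# sideways at speed v, changes direction at angular rate v / d (rad/s).
# The lateral speed below is an assumed, illustrative value, not one
# measured in the experiment.
V_LATERAL = 0.5  # m/s

def angular_rate_deg(distance_m, speed_m_s=V_LATERAL):
    """Angular rate (deg/s) of a source at distance_m for a laterally moving listener."""
    return math.degrees(speed_m_s / distance_m)

near = angular_rate_deg(1.0)  # source 1 m away
far = angular_rate_deg(4.0)   # source 4 m away
# The nearer source sweeps past four times faster; this difference in
# angular rate is the distance cue the listeners could exploit.
```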
Finally, while humans have been more often studied for their unique abilities to communicate with each other, and bats for their extraordinary capacity to locate objects by sound, this dissertation turns this setting of priorities on its head with the last paper (Chapter 6): based on recordings of six pale spear-nosed bats (Phyllostomus discolor), it is a survey of the identifiably distinct vocalisations observed in their social interactions, along with a description of the different situations in which they typically occur.

    Perceptual strategies in active and passive hearing of neotropical bats

    Basic spectral and temporal sound properties, such as frequency content and timing, are evaluated by the auditory system to build an internal representation of the external world and to generate auditory-guided behaviour. Using echolocating bats as a model system, I investigated aspects of spectral and temporal processing during echolocation and in relation to passive listening, as well as echo-acoustic object recognition for navigation. In the first project (Chapter 2), spectral processing during passive and active hearing was compared in the echolocating bat Phyllostomus discolor. Sounds are ubiquitously used for many vital behaviours, such as communication, predator and prey detection, or echolocation. The frequency content of a sound is one major component for the correct perception of the transmitted information, but it is distorted while travelling from the sound source to the receiver. In order to correctly determine the frequency content of an acoustic signal, the receiver needs to compensate for these distortions. We first investigated whether P. discolor compensates for distortions of the spectral shape of transmitted sounds during passive listening. Bats were trained to discriminate lowpass-filtered from highpass-filtered acoustic impulses while hearing a continuous white-noise background with a flat spectral shape. We then assessed their spontaneous classification of acoustic impulses with varying spectral content depending on the background’s spectral shape (flat or lowpass filtered). A lowpass-filtered noise background increased the proportion of highpass classifications of the same filtered impulses compared to a white-noise background. Like humans, the bats thus compensated for the background’s spectral shape. In an active-acoustic version of the identical experiment, the bats had to classify filtered playbacks of their emitted echolocation calls instead of passively presented impulses. 
During echolocation, the classification of the filtered echoes was independent of the spectral shape of the passively presented background noise. Likewise, call structure did not change to compensate for the background’s spectral shape. Hence, auditory processing differs between passive and active hearing, with echolocation representing an independent mode with its own rules of auditory spectral analysis. The second project (Chapter 3) was concerned with the accurate measurement of the time of occurrence of auditory signals, and as such also with distance measurement in echolocation. In addition, the importance of passive listening compared to echolocation turned out to be an unexpected factor in this study. To measure the distance to objects (ranging), bats measure the time delay between an outgoing call and its returning echo. Ranging accuracy has received considerable interest in echolocation research for several reasons: (i) behaviourally, it is important for the bat’s ability to locate objects and navigate its surroundings; (ii) physiologically, the neuronal implementation of precise measurements of very short time intervals is a challenge; and (iii) the conjectured echo-acoustic receiver of bats is of interest for signal processing. Here, I trained the nectarivorous bat Glossophaga soricina to detect a jittering real target and found a biologically plausible distance accuracy of 4–7 mm, corresponding to a temporal accuracy of 20–40 μs. However, the bats presumably did not initially use the jittering echo delay as the most prominent cue, but first relied on passive listening, which could only be prevented by playing back masking noise. This shows that even a non-gleaning bat relies heavily on passive acoustic cues and that measuring short time intervals is difficult. This result calls into question other studies reporting sub-microsecond time-jitter thresholds. 
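The quoted correspondence between distance accuracy and temporal accuracy follows directly from the round-trip travel time of an echo, t = 2d/c. A minimal check of the arithmetic, assuming the nominal speed of sound in air:

```python
# Round-trip echo delay for a target at distance d: t = 2 * d / c.
C_AIR = 343.0  # m/s, approximate speed of sound in air at 20 degrees C

def echo_delay_us(distance_m, c=C_AIR):
    """Round-trip delay (microseconds) of an echo from a target distance_m away."""
    return 2.0 * distance_m / c * 1e6

lo = echo_delay_us(0.004)  # 4 mm of distance jitter -> about 23 us
hi = echo_delay_us(0.007)  # 7 mm of distance jitter -> about 41 us
```

The 4–7 mm behavioural accuracy thus maps onto roughly 23–41 μs of echo-delay jitter, matching the 20–40 μs range stated in the abstract.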
The third project (Chapter 4) linked the perception of echo-acoustic stimuli to the appropriate behavioural reactions, namely evasive flight manoeuvres around virtual objects presented in the flight paths of wild, untrained bats. Echolocating bats are able to orient in complete darkness solely by analysing the echoes of their emitted calls. They detect, recognize and classify objects based on the spectro-temporal reflection pattern received at the two ears. Auditory object analysis, however, is inevitably more complicated than visual object analysis, because the one-dimensional acoustic time signal only transmits range information, i.e., the object’s distance and its longitudinal extent. All other object dimensions, like width and height, have to be inferred from comparative analysis of the signals at both ears and over time. The purpose of this study was to measure perceived object dimensions in wild, experimentally naïve bats by video-recording and analysing the bats’ evasive flight manoeuvres in response to the presentation of virtual echo-acoustic objects with independently manipulated acoustic parameters. Flight manoeuvres were analysed by extracting the flight paths of all passing bats. As a control for our method, we also recorded the flight paths of bats in response to a real object. Bats avoided the real object by flying around it. However, we did not find any flight-path changes in response to the presentation of several virtual objects. We assume that the missing spatial extent of the virtual echo-acoustic objects, due to playback from only one loudspeaker, was the main reason for the failure to evoke evasive flight manoeuvres. This study therefore emphasises for the first time the importance of the spatial dimension of virtual objects, which has so far been neglected in virtual object presentations.

    Using Virtual Acoustic Space to Investigate Sound Localisation


    Perception of Spatially Distributed Sound Sources

    Sound localization studies have mostly concentrated on the localization of a single source. Nevertheless, there are studies on the perception of several simultaneous sound sources in spatial conditions. A large number of those experiments have been done using headphones, but loudspeakers have also been used. It has been found that spatial width perception is affected, for example, by signal loudness, frequency and temporal length. In this thesis, the perception of spatial sound was investigated by conducting two listening tests. The focus was on the resolution of directional perception of spatially distributed sound sources. The tests were performed in an anechoic chamber using 15 loudspeakers placed in the horizontal plane, equidistant from the listener. In the first listening test, various sound source distributions were used, such as single wide sound sources of varying widths and wide sound sources with gaps in the distribution. The subjects were asked to distinguish, according to their own perception, which loudspeakers emitted sound. Results show that small gaps in the sound source were not perceived accurately and that wide sound sources were perceived as narrower than they actually were. The results also indicate that the resolution for fine spatial details is worse than 15 degrees when the sound source is wide. In the second listening test, noise signals with different bandwidths, as well as sine waves distributed across the loudspeakers, were used as stimuli. These were presented to the subjects using loudspeaker combinations with different loudspeaker densities. Two loudspeaker combinations were presented at a time, and the task of the subjects was to discriminate which of the two combinations shown on a touch screen was used in producing the latter of the two sound events. The results indicate that perception accuracy decreased as loudspeaker density increased. The bandwidth of the noise signals also affected perception accuracy.

    Dynamic Echo Analysis In Echo Imaging


    Do Zebra Finch Parents Fail to Recognise Their Own Offspring?

    Individual recognition systems require the sender to be individually distinctive and the receiver to be able to perceive differences between individuals and react accordingly. Many studies have demonstrated that acoustic signals of almost any species contain individualized information. However, fewer studies have tested experimentally whether those signals are actually used for individual recognition by potential receivers. While laboratory studies using zebra finches have shown that fledglings recognize their parents by their “distance call”, mutual recognition using the same call type has not yet been demonstrated. In a laboratory study with zebra finches, we first quantified between-individual acoustic variation in the distance calls of fledglings. In a second step, we tested parents’ recognition of fledgling calls using playback experiments. With a discriminant function analysis, we show that individuals are highly distinctive and that most measured parameters have a very high potential to encode individuality. The response pattern of zebra finch parents shows that they do react to the calls of fledglings; however, they do not distinguish between their own and unfamiliar offspring, despite this individual distinctiveness. This finding is interesting in light of the high percentage of misdirected feedings observed in our communal breeding aviaries. Our results demonstrate the importance of adopting a receiver’s perspective and suggest that variation in fledgling contact calls might not be used in individual recognition of offspring.
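The discriminant function analysis mentioned above can be illustrated with a minimal two-class Fisher discriminant. The call parameters (peak frequency, duration) and all numeric values below are hypothetical placeholders chosen only to show how such an analysis separates individuals; they are not data from the study.

```python
import numpy as np

# Hypothetical call parameters (peak frequency in kHz, duration in ms) for two
# fledglings; the values are illustrative only, not measurements from the study.
bird_a = np.array([[4.1, 120.0], [4.3, 118.0], [4.0, 122.0], [4.2, 121.0]])
bird_b = np.array([[5.6, 95.0], [5.8, 97.0], [5.7, 93.0], [5.5, 96.0]])

def fisher_direction(x0, x1):
    """Fisher discriminant direction w = Sw^-1 (m1 - m0)."""
    sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)  # pooled scatter
    return np.linalg.solve(sw, x1.mean(axis=0) - x0.mean(axis=0))

w = fisher_direction(bird_a, bird_b)
# Decision threshold halfway between the projected class means.
threshold = 0.5 * (bird_a.mean(axis=0) + bird_b.mean(axis=0)) @ w

# Every call projects on the correct side of the threshold, i.e. the two
# individuals are fully separable in this (synthetic) feature space.
separable = all(x @ w < threshold for x in bird_a) and \
            all(x @ w > threshold for x in bird_b)
```

High separability of this kind is what the study found for fledgling calls; the point of the paper is that distinctiveness alone does not guarantee that receivers use it.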