Biomechanics of hearing in katydids
Animals have evolved a vast diversity of mechanisms to detect sounds. Auditory
organs are used to detect intraspecific communicative signals and environmental
sounds relevant to survival. To hear, terrestrial animals must convert the acoustic
energy contained in the airborne sound pressure waves into neural signals. In
mammals, spectral quality is assessed by the decomposition of incoming sound waves
into elementary frequency components using a sophisticated cochlear system. Some
neotropical insects like katydids (bushcrickets) have evolved biophysical mechanisms
for auditory processing that are remarkably equivalent to those of mammals. Located
on their front legs, katydid ears are small, yet are capable of performing several of the
tasks usually associated with mammalian hearing. These tasks include air-to-liquid
impedance conversion, signal amplification, and frequency analysis. Impedance
conversion is achieved by a lever system, a mechanism functionally analogous to the
mammalian middle ear ossicles, yet morphologically distinct. In katydids, the exact
mechanisms supporting frequency analysis appear diverse, yet all result in
dispersive wave propagation phenomenologically similar to that of cochlear systems.
Phylogenetically unrelated, katydids and tetrapods have thus evolved remarkably
different structural solutions to common biophysical problems. Here, we discuss the
biophysics of hearing in katydids and the variations observed across species.
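The impedance-matching role of such a lever system can be illustrated with the textbook mammalian analogue, where the pressure gain is the eardrum-to-oval-window area ratio multiplied by the ossicular lever ratio. The numbers below are generic illustrative values for the human middle ear, not katydid measurements:

```python
def pressure_gain(area_in_mm2: float, area_out_mm2: float, lever_ratio: float) -> float:
    """Pressure amplification of a simple area-plus-lever impedance converter."""
    # Force is carried through the lever (scaled by the arm ratio), so the
    # pressure rises by the area ratio times the lever ratio.
    return (area_in_mm2 / area_out_mm2) * lever_ratio

# Illustrative human middle-ear values: ~60 mm^2 eardrum, ~3.2 mm^2 oval
# window, ~1.3 ossicular lever ratio -> roughly a 24x pressure gain.
gain = pressure_gain(60.0, 3.2, 1.3)
```

The same area-plus-lever logic applies to the katydid tympanal lever, only with very different anatomy and numbers.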
A Thoracic Mechanism of Mild Traumatic Brain Injury Due to Blast Pressure Waves
The mechanisms by which blast pressure waves cause mild to moderate traumatic brain injury (mTBI) are an open question. Possibilities include acceleration of the head, direct passage of the blast wave via the cranium, and propagation of the blast wave to the brain via a thoracic mechanism. The hypothesis that the blast pressure wave reaches the brain via a thoracic mechanism is considered in light of ballistic and blast pressure wave research. Ballistic pressure waves, caused by penetrating ballistic projectiles or ballistic impacts to body armor, can only reach the brain via an internal mechanism and have been shown to cause cerebral effects. Similar effects have been documented when a blast pressure wave has been applied to the whole body or focused on the thorax in animal models. While vagotomy reduces apnea and bradycardia due to ballistic or blast pressure waves, it does not eliminate neural damage in the brain, suggesting that the pressure wave directly affects the brain cells via a thoracic mechanism. An experiment is proposed which isolates the thoracic mechanism from cranial mechanisms of mTBI due to blast wave exposure. Results have implications for evaluating the risk of mTBI due to blast exposure and for developing effective protection.
A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models
Using Hidden Markov Models (HMMs) as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extendibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species and HMM network topologies suited to each task. This experimental work includes a simple call type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.
Cultural evolution of killer whale calls: background, mechanisms and consequences
Cultural evolution is a powerful process shaping behavioural phenotypes of many species including our own. Killer whales are one of the species with relatively well-studied vocal culture. Pods have distinct dialects comprising a mix of unique and shared call types; calves adopt the call repertoire of their matriline through social learning. We review different aspects of killer whale acoustic communication to provide insights into the cultural transmission and gene-culture co-evolution processes that produce the extreme diversity of group and population repertoires. We argue that the cultural evolution of killer whale calls is not a random process driven by steady error accumulation alone: temporal change occurs at different speeds in different components of killer whale repertoires, and constraints in call structure and horizontal transmission often degrade the phylogenetic signal. We discuss implications from birdsong and human linguistic studies, and propose several hypotheses about killer whale dialect evolution.
A Matlab Tool For The Characterisation of Recorded Underwater Sound (Chorus)
The advent of low-cost, high-quality underwater sound recording systems has greatly increased the acquisition of large (multi-GB) acoustic datasets that can span from hours to several months in length. Scrutinizing such datasets for points of interest can be laborious, so the ability to view large portions of a dataset on a single screen, or to apply a level of automation to find or select individual sounds, is required. The user-friendly "Characterisation Of Recorded Underwater Sound" (CHORUS) Matlab graphical user interface, a toolbox that can be continually revised, was designed for processing such datasets, isolating signals, quantifying calibrated received levels, and visually teasing out long- and short-term variations in the noise spectrum. A function to automatically detect, count, and measure particular signals (e.g. blue whale sounds) is integrated in the toolbox, with the ability to include categorised calls of other marine fauna in the future. Sunrise and sunset times can be displayed in long-term average spectrograms of sea noise to reveal diurnal cycles in the vocal activity of marine fauna. A number of example studies are discussed where the toolbox has been used to analyse biological, natural physical, and anthropogenic sounds.
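CHORUS itself is a Matlab toolbox, but the long-term average spectrogram it displays boils down to averaging consecutive power spectra into one column per time bin, which compresses months of audio into a single screen. A rough Python sketch of that averaging step (naive DFT for self-containment; frame length and bin size are illustrative, not CHORUS defaults):

```python
import math

def power_spectrum(frame):
    # Naive DFT power spectrum; O(n^2), fine for short illustrative frames.
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        spec.append((re * re + im * im) / n)
    return spec

def long_term_average(samples, frame_len, frames_per_bin):
    # Split into non-overlapping frames, then average frames_per_bin
    # consecutive power spectra into one output column per time bin.
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    columns = []
    for i in range(0, len(frames) - frames_per_bin + 1, frames_per_bin):
        specs = [power_spectrum(f) for f in frames[i:i + frames_per_bin]]
        columns.append([sum(s[k] for s in specs) / len(specs)
                        for k in range(len(specs[0]))])
    return columns
```

Frequency bin k of each column corresponds to k × (sample rate / frame_len) Hz, so a persistent tone shows up as a bright horizontal line across columns.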
Parallels in the sequential organization of birdsong and human speech.
Human speech possesses a rich hierarchical structure that allows for meaning to be altered by words spaced far apart in time. In contrast, the sequential structure of nonhuman communication is thought to follow non-hierarchical Markovian dynamics operating over only short distances. Here, we show that human speech and birdsong share a similar sequential structure indicative of both hierarchical and Markovian organization. We analyze the sequential dynamics of song from multiple songbird species and speech from multiple languages by modeling the information content of signals as a function of the sequential distance between vocal elements. Across short sequence-distances, an exponential decay dominates the information in speech and birdsong, consistent with underlying Markovian processes. At longer sequence-distances, the decay in information follows a power law, consistent with underlying hierarchical processes. Thus, the sequential organization of acoustic elements in two learned vocal communication signals (speech and birdsong) shows functionally equivalent dynamics, governed by similar processes.
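The two decay regimes reported above can be made concrete with a toy composite model: information at sequence distance d as the sum of an exponential term (Markovian) and a power-law term (hierarchical). The parameter values below are illustrative placeholders, not fitted values from the study:

```python
import math

def info_terms(d, a=1.0, b=2.0, c=0.05, alpha=0.5):
    """Return the (exponential, power-law) contributions at distance d."""
    # a*exp(-d/b): short-range Markovian decay; c*d^-alpha: long-range
    # hierarchical decay. Which term dominates depends on d.
    return a * math.exp(-d / b), c * d ** (-alpha)

# At short distances the exponential term dominates; by large d it has
# decayed to nearly zero and the slowly falling power law remains.
```

Fitting such a composite curve to empirical information estimates is one way to test whether a signal carries both kinds of structure.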
The real time analysis of acoustic weld emissions using neural networks
Artificial Neural Networks (ANNs) are becoming an increasingly viable computing tool
in control scenarios where human expertise is so often required. The development of
software emulations and dedicated VLSI devices is proving successful in real world
applications where complex signal analysis, pattern recognition and discrimination are
important factors.
An established observation is that a skilled welder is able to monitor a manual arc
welding process by subconsciously changing the position of the electrode in response to
an adverse change in audible process noise. Expert systems applied to the
analysis of chaotic acoustic emissions have failed to extract any salient
information, owing to the inability of conventional architectures to process
vast quantities of erratic data at real-time speeds.
This paper describes the application of a hybrid ANN system, utilising a
combination of multiple ANN architectures and conventional techniques, to
establish system-parameter acoustic signatures for subsequent on-line control.
Ultrasound localization microscopy to image and assess microvasculature in a rat kidney.
The recent development of ultrasound localization microscopy, where individual microbubbles (contrast agents) are detected and tracked within the vasculature, provides new opportunities for imaging the vasculature of entire organs with a spatial resolution below the diffraction limit. In stationary tissue, recent studies have demonstrated a theoretical resolution on the order of microns. In this work, single microbubbles were localized in vivo in a rat kidney using a dedicated high frame rate imaging sequence. Organ motion was tracked by assuming rigid motion (translation and rotation) and appropriate correction was applied. In contrast to previous work, coherence-based non-linear phase inversion processing was used to reject tissue echoes while maintaining echoes from very slowly moving microbubbles. Blood velocity in the small vessels was estimated by tracking microbubbles, demonstrating the potential of this technique to improve vascular characterization. Previous optical studies of microbubbles in vessels of approximately 20 microns have shown that expansion is constrained, suggesting that microbubble echoes would be difficult to detect in such regions. We therefore utilized the echoes from individual microbubbles as microscopic sensors of slow flow associated with such vessels and demonstrate that highly correlated, wideband echoes are detected from individual microbubbles in vessels with flow rates below 2 mm/s.
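The tracking-based velocity estimate works by linking each microbubble localization to a nearby localization in the next frame and converting the per-frame displacement to a speed. A toy sketch under strong simplifying assumptions (a single bubble linked greedily, positions already motion-corrected; the gate and units are illustrative, not the paper's parameters):

```python
import math

def track_speed(frames, frame_rate_hz, gate_mm=0.1):
    """Greedy nearest-neighbour tracking of one microbubble.

    frames: per-frame lists of (x, y) localizations in mm.
    Returns the mean speed along the linked track in mm/s.
    """
    track = [frames[0][0]]  # seed the track with the first localization
    for detections in frames[1:]:
        nearest = min(detections, key=lambda p: math.dist(p, track[-1]))
        if math.dist(nearest, track[-1]) <= gate_mm:  # reject implausible jumps
            track.append(nearest)
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    if not steps:  # no link survived the gate
        return 0.0
    return sum(steps) / len(steps) * frame_rate_hz
```

A bubble advancing 4 µm per frame at a 500 Hz frame rate, for example, falls in the sub-2 mm/s slow-flow regime discussed above.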