
    A toolbox for animal call recognition

    Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world’s biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal’s proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
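
    A minimal sketch of the kind of generic, volume-invariant recognizer such a toolbox might contain: a log-spectrogram template matcher that can be seeded with a single exemplar call and later refined with confirmed detections, in the bootstrap spirit described above. The function names, parameters, and threshold below are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch, not the toolbox described in the abstract.
    import numpy as np
    from scipy.signal import spectrogram, correlate2d

    def make_template(call_audio, fs, nperseg=512):
        """Build a normalised log-spectrogram template from one exemplar call."""
        _, _, S = spectrogram(call_audio, fs, nperseg=nperseg)
        S = np.log1p(S)                           # compress dynamic range for volume invariance
        return (S - S.mean()) / (S.std() + 1e-9)

    def detect_calls(recording, fs, template, threshold=0.6, nperseg=512):
        """Slide the template over a long recording and return candidate call times."""
        _, t, S = spectrogram(recording, fs, nperseg=nperseg)
        S = np.log1p(S)
        S = (S - S.mean()) / (S.std() + 1e-9)
        score = correlate2d(S, template, mode="valid")[0] / template.size
        hits = np.where(score > threshold)[0]
        return t[hits], score[hits]

    Detections confirmed by an ecologist could then be averaged back into the template, refining the recognizer without a large labelled corpus.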

    Robust Anomaly Detection with Applications to Acoustics and Graphs

    Our goal is to develop a robust anomaly detector that can be incorporated into pattern recognition systems that may need to learn, but will never be shunned for making egregious errors. The ability to know what we do not know is a concept often overlooked when developing classifiers to discriminate between different types of normal data in controlled experiments. We believe that an anomaly detector should be used to produce warnings in real applications when operating conditions change dramatically, especially when other classifiers only have a fixed set of bad candidates from which to choose. Our approach to distributional anomaly detection is to gather local information using features tailored to the domain, aggregate all such evidence to form a global density estimate, and then compare it to a model of normal data. A good match to a recognizable distribution is not required. By design, this process can detect the "unknown unknowns" [1] and properly react to the "black swan events" [2] that can have devastating effects on other systems. We demonstrate that our system is robust to anomalies that may not be well-defined or well-understood even if they have contaminated the training data that is assumed to be non-anomalous. In order to develop a more robust speech activity detector, we reformulate the problem to include acoustic anomaly detection and demonstrate state-of-the-art performance using simple distribution modeling techniques that can be used at incredibly high speed. We begin by demonstrating our approach when training on purely normal conversational speech and then remove all annotation from our training data and demonstrate that our techniques can robustly accommodate anomalous training data contamination. When comparing continuous distributions in higher dimensions, we develop a novel method of discarding portions of a semi-parametric model to form a robust estimate of the Kullback-Leibler divergence. Finally, we demonstrate the generality of our approach by using the divergence between distributions of vertex invariants as a graph distance metric and achieve state-of-the-art performance when detecting graph anomalies with neighborhoods of excessive or negligible connectivity. [1] D. Rumsfeld, Transcript: DoD news briefing - Secretary Rumsfeld and Gen. Myers, 2002. [2] N. N. Taleb, The Black Swan: The Impact of the Highly Improbable. Random House, 2007.
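
    The thesis forms a robust Kullback-Leibler estimate by discarding portions of a semi-parametric model; the sketch below shows only the simpler underlying idea of comparing a test distribution against a model of normal data using a plain Monte Carlo KL estimate. The names (fit_density, normal_features, threshold) are assumptions for illustration, not the thesis' implementation.

    # Hypothetical sketch of distributional anomaly scoring.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_density(features, n_components=8, seed=0):
        """Semi-parametric density estimate (GMM) over local feature vectors."""
        return GaussianMixture(n_components, covariance_type="diag",
                               random_state=seed).fit(features)

    def kl_divergence(p_model, q_model, n_samples=20000):
        """Monte Carlo estimate of KL(p || q) between two fitted GMMs."""
        x, _ = p_model.sample(n_samples)
        return float(np.mean(p_model.score_samples(x) - q_model.score_samples(x)))

    # Fit once on features assumed normal, then score new segments:
    # normal_model = fit_density(normal_features)
    # is_anomalous = kl_divergence(fit_density(segment_features), normal_model) > threshold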

    The Development of Cortical Responses to the Integration of Audiovisual Speech in Infancy

    In adults, the integration of audiovisual speech elicits specific higher (super-additive) or lower (sub-additive) cortical responses when compared to the responses to unisensory stimuli. Although there is evidence that the fronto-temporal network is active during perception of audiovisual speech in infancy, the development of fronto-temporal responses to audiovisual integration remains unknown. In the current study, 5-month-olds and 10-month-olds watched bimodal (audiovisual) and alternating unimodal (auditory + visual) syllables. In this context, alternating unimodal denotes alternating auditory and visual syllables that are perceived as separate syllables by adults. Using fNIRS, we measured responses over large cortical areas including the inferior frontal and superior temporal regions. We identified channels showing different responses to the bimodal than to the alternating unimodal condition and used multivariate pattern analysis (MVPA) to decode patterns of cortical responses to bimodal (audiovisual) and alternating unimodal (auditory + visual) speech. Results showed that in both age groups integration elicits cortical responses consistent with both super- and sub-additive responses in the fronto-temporal cortex. The univariate analyses revealed that between 5 and 10 months the spatial distribution of these responses becomes increasingly focal. MVPA correctly classified responses at 5 months, with key input from channels located over the inferior frontal and superior temporal regions of the right hemisphere. However, MVPA classification was not successful at 10 months, suggesting a potential cortical reorganisation of audiovisual speech perception at this age. These results show the complex and non-gradual development of the cortical responses to integration of congruent audiovisual speech in infancy.
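
    A minimal sketch of the decoding step, assuming trial-averaged responses arranged as a trials-by-channels matrix; the classifier choice, array shapes, and placeholder data are assumptions, not the study's pipeline.

    # Hypothetical MVPA sketch: decode bimodal vs. alternating unimodal from channel responses.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 20))        # placeholder: 40 trials x 20 fNIRS channels
    y = np.repeat([0, 1], 20)            # 0 = bimodal, 1 = alternating unimodal

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"decoding accuracy: {accuracy:.2f}")   # compare against chance (0.5), e.g. by permutation testing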

    Cue estimation for vowel perception prediction in low signal-to-noise ratios

    This study investigates the signal processing required to evaluate hearing perception prediction models at low signal-to-noise ratios (SNRs). It focusses on speech enhancement and the estimation of the cues from which speech may be recognized, specifically where these cues are estimated from severely degraded speech (SNRs ranging from -10 dB to -3 dB). This research has application in the field of cochlear implants (CIs), where a listener hears degraded speech due to several distortions introduced by the biophysical interface (e.g. frequency and amplitude discretization). These difficulties can also be interpreted as a loss in signal quality due to a specific type of noise. The ability to investigate perception in low-SNR conditions may have application in the development of CI signal processing algorithms to counter the effects of noise. In the military domain, a speech signal may be degraded intentionally by enemy forces or unintentionally owing to engine noise, for example. The ability to analyse and predict perception can be used for algorithm development to counter the unintentional or intentional interference, or to predict perception degradation if low-SNR conditions cannot be avoided. A previously documented perception model (Svirsky, 2000) is used to illustrate that the proposed signal processing steps can indeed be used successfully to estimate the various cues required by the perception model at SNRs as low as -10 dB. Dissertation (MEng), University of Pretoria, 2009. Electrical, Electronic and Computer Engineering.
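
    Producing the severely degraded test material is itself a small signal-processing step: the noise must be scaled so that the mixture reaches a target SNR. The sketch below shows that scaling, assuming speech and noise arrays of compatible length; it is an illustration, not the dissertation's code.

    # Hypothetical sketch: mix clean speech with noise at a target SNR (e.g. -10 dB).
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
        noise = noise[:len(speech)]                  # assumes noise is at least as long as speech
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
        return speech + gain * noise

    # degraded = mix_at_snr(speech, babble_noise, snr_db=-10)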

    The neurobiology of speech perception decline in aging

    Speech perception difficulties are common amongst older adults, yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor to these difficulties has challenged the traditional view that peripheral hearing loss was the main factor in the aetiology of these difficulties. Here we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right SMA. Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal affected the relationship of age to speech perception skills, suggesting a role for this region in maintaining speech perception in older age. These results provide important new insights into the neurobiology of speech perception in aging.

    Transformation of a temporal speech cue to a spatial neural code in human auditory cortex

    In speech, listeners extract continuously varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population’s preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues.
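
    A toy illustration of the amplitude-coding idea, assuming Gaussian tuning of two populations to preferred voice-onset times; the tuning widths and preferred VOTs are invented, and this is not the study's network model.

    # Hypothetical sketch: population amplitude as a function of VOT along a /ba/-/pa/ continuum.
    import numpy as np

    vot_ms = np.arange(0, 55, 5)                      # 0-50 ms voice-onset times

    def population_response(vot, preferred_vot, width=12.0):
        """Response amplitude of a population tuned to a preferred voice-onset time."""
        return np.exp(-0.5 * ((vot - preferred_vot) / width) ** 2)

    short_vot_pop = population_response(vot_ms, preferred_vot=5.0)    # prefers /ba/-like VOTs
    long_vot_pop = population_response(vot_ms, preferred_vot=45.0)    # prefers /pa/-like VOTs

    for v, a, b in zip(vot_ms, short_vot_pop, long_vot_pop):
        print(f"VOT {v:2d} ms   short-VOT pop {a:.2f}   long-VOT pop {b:.2f}")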

    Reaction time measures of perceptual and linguistic factors in a phoneme monitoring task
