
    Improved status following behavioural intervention in a case of severe dysarthria with stroke aetiology

    There is little published intervention outcome literature concerning dysarthria acquired from stroke. Single case studies have the potential to provide more detailed specification and interpretation than is generally possible with larger participant numbers, and are thus informative for clinicians who may deal with similar cases. Such research also contributes to the future planning of larger scale investigations. A behavioural intervention is described which was carried out with a man with severe dysarthria following stroke, beginning at seven and ending at nine months after stroke. Pre-intervention stability between five and seven months contrasted with significant improvements post-intervention on listener-rated measures of word and reading intelligibility and of communication effectiveness in conversation. A range of speech analyses was undertaken (comprising rate, pause and intonation characteristics in connected speech, and phonetic transcription of single word production), with the aim of identifying components of speech which might explain the listeners’ perceptions of improvement. Pre- to post-intervention changes could be detected mainly in parameters related to utterance segmentation and intonation. The basis of improvement in dysarthria following intervention is complex, both in terms of the active therapeutic dimensions and of the specific speech alterations which account for changes to intelligibility and effectiveness. Single case results are not necessarily generalisable to other cases, and outcomes may be affected by participant factors and therapeutic variables which are not readily controllable.
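    The acoustic analyses mentioned above (rate, pause and intonation characteristics in connected speech) can be approximated with standard signal-processing tools. The sketch below is illustrative rather than the study’s actual analysis pipeline: it estimates pause count and total pause duration from a mono recording using a short-time energy threshold, and all frame lengths and thresholds are assumed values.

```python
import numpy as np

def pause_statistics(signal, sr, frame_ms=25, hop_ms=10,
                     silence_db=-40.0, min_pause_s=0.25):
    """Estimate pause count and total pause time (s) in a mono speech signal."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # Short-time RMS energy, expressed in dB relative to the utterance peak.
    rms = np.array([np.sqrt(np.mean(signal[i:i + frame] ** 2))
                    for i in range(0, len(signal) - frame, hop)])
    db = 20 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
    silent = db < silence_db
    # Group consecutive silent frames into pauses; keep only long ones.
    pauses, run = [], 0
    for s in silent:
        if s:
            run += 1
        else:
            if run * hop / sr >= min_pause_s:
                pauses.append(run * hop / sr)
            run = 0
    if run * hop / sr >= min_pause_s:
        pauses.append(run * hop / sr)
    return len(pauses), sum(pauses)
```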

    Advanced automatic mixing tools for music

    This PhD thesis presents research on several independent systems that, when combined, can generate an automatic sound mix from an unknown set of multi-channel inputs. The research explores the possibility of reproducing the mixing decisions of a skilled audio engineer with minimal or no human interaction, and is restricted to non-time-varying mixes for large room acoustics. It has applications in live music concerts, remote mixing, recording and post-production, as well as live mixing for interactive scenes. Currently, automated mixers are capable of saving a set of static mix scenes that can be loaded for later use, but they lack the ability to adapt to a different room or to a different set of inputs; in other words, they lack the ability to make mixing decisions automatically. The automatic mixer research described here distinguishes between the engineering and the subjective contributions to mixing. It aims to automate the technical tasks related to audio mixing while freeing the audio engineer to perform the fine-tuning involved in generating an aesthetically pleasing sound mix. Although the system mainly deals with the technical constraints involved in generating an audio mix, it takes advantage of common practices performed by sound engineers wherever possible. The system also makes use of inter-dependent channel information for controlling signal processing tasks while aiming to maintain system stability at all times. A working implementation of the system is described, and subjective evaluation comparing a human mix with the automatic mix is used to measure the success of the automatic mixing tools.
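    As an illustration of the cross-adaptive idea mentioned above (using inter-dependent channel information to set processing parameters), the sketch below balances per-channel gains so that each source contributes roughly equal loudness to the mix. This is a minimal, assumption-laden example and not the system developed in the thesis; the function names and the geometric-mean loudness target are invented for illustration.

```python
import numpy as np

def equal_loudness_gains(tracks):
    """tracks: list of mono numpy arrays. Returns one linear gain per track."""
    rms = np.array([np.sqrt(np.mean(t ** 2)) + 1e-12 for t in tracks])
    target = np.exp(np.mean(np.log(rms)))   # geometric-mean loudness target
    return target / rms                     # gain that moves each track to it

def automix(tracks):
    gains = equal_loudness_gains(tracks)
    n = min(len(t) for t in tracks)
    mix = sum(g * t[:n] for g, t in zip(gains, tracks))
    return mix / max(1.0, np.max(np.abs(mix)))  # normalize to avoid clipping

# Example: three synthetic sources at very different input levels.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tracks = [0.9 * np.sin(2 * np.pi * 220 * t),
          0.1 * np.sin(2 * np.pi * 330 * t),
          0.05 * np.random.randn(sr)]
mix = automix(tracks)
```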

    Altered Sensory Feedback in Speech

    This chapter reviews the effects that temporal, spectral and intensity perturbations to auditory and vibratory feedback have on fluent speakers and people who stutter (PWS). Early work on Delayed Auditory Feedback (DAF) with fluent speakers showed that speech errors arise, speakers increase voice level, speech is slowed, medial vowels in syllables are elongated, and pitch becomes monotone. It has been claimed that short-delay DAF produces effects on speech fluency, rate and naturalness that are as good as those of other forms of altered sensory feedback, although this has been contested. Another alternative would be to reduce dosage by ramping the intensity of altered auditory feedback down after a dysfluency, provided speech is fluent, and switching it back to its full level when the next episode of stuttering occurs. Overall, temporal delays to speech feedback have a robust effect on both fluent speakers and PWS; in fluent speakers, DAF induces a range of disruptions to speech.
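    A DAF system of the kind discussed here is essentially a delay line between microphone and headphones. The sketch below shows a minimal real-time loop using the sounddevice library and a ring buffer; the 200 ms delay is an assumed value chosen to match the long delays typical of classic DAF studies, not a parameter taken from this chapter.

```python
import numpy as np
import sounddevice as sd

SR = 44100
DELAY_S = 0.2                       # assumed feedback delay of 200 ms
buf = np.zeros(int(SR * DELAY_S), dtype=np.float32)
pos = 0

def callback(indata, outdata, frames, time, status):
    """Play back each input sample DELAY_S seconds after it was captured."""
    global pos
    mono = indata[:, 0]
    for i in range(frames):         # ring buffer: read the old sample, write the new
        outdata[i, 0] = buf[pos]
        buf[pos] = mono[i]
        pos = (pos + 1) % len(buf)

with sd.Stream(samplerate=SR, channels=1, dtype="float32",
               callback=callback):
    sd.sleep(10_000)                # run the DAF loop for 10 seconds
```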

    Automatic music genre classification

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science, 2014. No abstract provided.

    Effects of moderate-level sound exposure on behavioral thresholds in chinchillas

    Normal audiometric thresholds following noise exposure have generally been considered an indication of a recovered cochlea and intact peripheral auditory system, yet recent animal work has challenged this classic assumption. Moderately noise-exposed animals have been shown to have permanent loss of synapses on inner hair cells (IHCs) and permanent damage to auditory nerve fibers (ANFs), specifically the low-spontaneous-rate (low-SR) fibers, despite normal electrophysiological thresholds. Loss of cochlear synapses, known as cochlear synaptopathy, disrupts auditory-nerve signaling, which may result in perceptual deficits for speech in noise despite normal audiometric thresholds. Perceptual deficit studies in humans have shown evidence consistent with the idea of cochlear synaptopathy, but to date there has been no direct evidence linking cochlear synaptopathy and perceptual deficits. Our research aims to develop a cochlear synaptopathy model in chinchilla, similar to previously established mouse and guinea pig models, in which the effects of cochlear synaptopathy on behavioral and physiological measures of low-frequency temporal coding can be explored.

    Positive-reinforcement operant conditioning was used to train animals to perform auditory detection tasks at four frequencies: 0.5, 1, 2, and 4 kHz. Our goal was to evaluate the detection abilities of chinchillas in tone-in-noise and sinusoidal amplitude-modulated (SAM) tone tasks, which are thought to rely on low-SR ANFs for encoding. Testing was performed before and after an octave-band noise exposure centered at 1 kHz, presented for 2 hours at 98.5 dB SPL. This exposure produced the synaptopathy phenotype in naïve chinchillas, based on auditory brainstem responses (ABRs), otoacoustic emissions (OAEs) and histological analyses. Threshold shift and inferred synaptopathy were determined from ABR and OAE measures in our behavioral animals.

    Overall, we have shown that chinchillas, like mice and guinea pigs, can display a cochlear synaptopathy phenotype following moderate-level sound exposure. This finding was seen in naïve exposed chinchillas, but our results suggest that susceptibility to noise can vary between naïve and behavioral cohorts, because minimal physiological evidence for synaptopathy was observed in the behavioral group. Hearing sensitivity determined by a tone-in-quiet behavioral task in normal-hearing chinchillas followed previously reported trends and supported the lack of permanent threshold shift following moderate noise exposure. As expected, thresholds determined in a tone-in-noise task were higher than thresholds measured in quiet. Behavioral thresholds measured in noise after moderate noise exposure did not show shifts relative to pre-exposure thresholds in noise. Also as expected, chinchillas were more sensitive at detecting fully modulated SAM-tone signals than less modulated ones, with individual modulation-depth thresholds falling within previously reported mammalian ranges.

    Although we have so far confirmed cochlear synaptopathy only in pilot assays with naïve animals (i.e., not in the pilot behavioral animals), this project has developed an awake protocol for moderate-level noise exposure, extending our lab’s previous experience with high-level, permanent-damage noise exposure under anesthesia. We also established chinchilla behavioral training and testing protocols on several auditory tasks, a methodology new to our laboratory, which we hope will ultimately allow us to identify changes in auditory perception resulting from moderate-level noise exposure.
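    The SAM-tone stimuli described above follow the standard form s(t) = [1 + m sin(2π fm t)] sin(2π fc t), where m is the modulation depth. The sketch below generates such a tone; the specific carrier frequency, modulation rate and depth are placeholder values, not the study’s parameters.

```python
import numpy as np

def sam_tone(fc, fm, m, dur, sr=44100):
    """Carrier fc (Hz), modulator fm (Hz), modulation depth m in [0, 1]."""
    t = np.arange(int(dur * sr)) / sr
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

tone = sam_tone(fc=1000.0, fm=40.0, m=1.0, dur=0.5)  # fully modulated 1 kHz tone
tone /= np.max(np.abs(tone))                          # normalize before playback
```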

    Ambisonics

    This open access book provides a concise explanation of the fundamentals and background of the surround sound recording and playback technology Ambisonics. It equips readers with the psychoacoustical, signal processing, acoustical, and mathematical knowledge needed to understand the inner workings of modern processing utilities and of special equipment for recording, manipulation, and reproduction in the higher-order Ambisonic format. The book comes with various practical examples based on free software tools and open scientific data for reproducible research. Its introductory section offers a perspective on Ambisonics spanning from the origins of coincident recordings in the 1930s to the Ambisonic concepts of the 1970s, as well as classical ways of applying Ambisonics in first-order coincident sound scene recording and reproduction that have been practiced since the 1980s. Because the underlying mathematics can at times become quite involved, yet should remain comprehensible without sacrificing readability, the book includes an extensive mathematical appendix. The book offers readers a deeper understanding of Ambisonic technologies, and will especially benefit scientists, audio-system and audio-recording engineers. In its advanced sections, fundamentals and modern techniques such as higher-order Ambisonic decoding, 3D audio effects, and higher-order recording are explained. These techniques are shown to be suitable for supplying audience areas ranging from studio-sized rooms to hundreds of listeners, or headphone-based playback, regardless of whether the 3D audio material is live, interactive, or studio-produced.
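    As a taste of the fundamentals the book covers, the sketch below performs first-order Ambisonic (B-format) encoding of a mono source at a given direction. It uses the traditional convention with W attenuated by 1/√2; modern higher-order material typically uses the ACN/SN3D convention instead, and this example is an illustration rather than code drawn from the book’s own tools.

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal at (azimuth, elevation), in radians, to W, X, Y, Z."""
    w = signal / np.sqrt(2.0)                           # omnidirectional component
    x = signal * np.cos(azimuth) * np.cos(elevation)    # front-back
    y = signal * np.sin(azimuth) * np.cos(elevation)    # left-right
    z = signal * np.sin(elevation)                      # up-down
    return np.stack([w, x, y, z])

# Example: a 440 Hz source placed 90 degrees to the left, on the horizon.
sr = 48000
s = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
bformat = encode_foa(s, azimuth=np.pi / 2, elevation=0.0)
```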

    Perceptual Mixing for Musical Production

    This PhD thesis develops a general model of music mixing, which enables a mix to be evaluated as a set of acoustic signals. A second model describes the mixing process as an optimisation problem, in which the errors are evaluated by comparing sound features of a mix with those of a reference mix, and the parameters are the controls on the mixing console. Initial focus is placed on live mixing, where the practical issues of live acoustic sources, multiple listeners, and acoustic feedback increase the technical burden on the mixing engineer. Using the two models, a system is demonstrated that takes reference mixes as input and automatically sets the controls on the mixing console to recreate their objective, acoustic sound features for all listeners, taking into account the practical issues outlined above. This reduces the complexity of mixing live music to that of recorded music, and unifies future mixing research. Sound features evaluated from audio signals alone are shown to be unsuitable for describing a mix, because they do not incorporate the effects of listening conditions or masking interactions between sounds. Psychophysical test methods are employed to develop a new perceptual sound feature, termed the loudness balance, which is the first loudness feature to be validated for musical sounds. A novel perceptual mixing system is designed, which allows users to directly control the loudness balance of the sounds they are mixing, for both live and recorded music, and which can be extended to incorporate other perceptual features. The perceptual mixer is also employed as an analytical tool to allow direct measurement of mixing best practice and to provide fully-automatic mixing functionality, and is shown to be an improvement over current heuristic models. Based on the conclusions of the work, a framework for future automatic mixing is provided, centred on perceptual sound features that are validated using psychophysical methods.
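    To make the loudness-balance control idea concrete, the sketch below solves for fader gains that realise a user-specified balance between tracks. Plain RMS stands in for the thesis’s validated perceptual loudness feature, so this is an assumption-laden illustration of the control structure, not the perceptual mixer itself; all names and values are invented.

```python
import numpy as np

def gains_for_balance(tracks, balance_db):
    """balance_db: desired level of each track relative to track 0, in dB."""
    rms = np.array([np.sqrt(np.mean(t ** 2)) + 1e-12 for t in tracks])
    target = rms[0] * 10.0 ** (np.asarray(balance_db) / 20.0)
    return target / rms                  # linear fader gain per track

# Example: vocals (track 0) as reference, guitar 3 dB down, bass 6 dB down.
sr = 44100
t = np.arange(sr) / sr
vocals = 0.3 * np.sin(2 * np.pi * 440 * t)
guitar = 0.8 * np.sin(2 * np.pi * 196 * t)
bass = 0.6 * np.sin(2 * np.pi * 98 * t)
g = gains_for_balance([vocals, guitar, bass], balance_db=[0.0, -3.0, -6.0])
mix = g[0] * vocals + g[1] * guitar + g[2] * bass
```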