1,303 research outputs found

    Acoustic Measures of the Singing Voice in Secondary School Students

    Descriptions of voice quality in vocal and choral music often rely on subjective terminology, which individuals may interpret differently. As access to acoustic measurement software becomes more widespread and affordable, music educators can combine traditional descriptive terminology with objective acoustic descriptors and data, which may improve both teaching and singing. Secondary school choral music educators face particular challenges in that they teach students whose voices undergo drastic physical and acoustic changes as they grow from children into adults. The purpose of this study was to objectively analyze various acoustic characteristics of the singing voice in secondary school students. Secondary school students (N = 157) from three different schools who were enrolled in choir (n = 89) or instrumental music classes (n = 68) recorded voice samples while singing five vowels: /i/, /e/, /a/, /o/, and /u/. Research questions investigated (a) descriptive statistics for vibrato rate, vibrato extent, singing power ratio (SPR), and amplitude differences between specific harmonic pairs; (b) differences in vibrato rate and extent between students enrolled in choir and students not enrolled in choir; (c) between-subjects and within-subjects comparisons of SPR based on choir enrollment and voice part for five different vowel productions; and (d) between-subjects and within-subjects comparisons of amplitude differences between specific harmonics based on choir enrollment and voice part for five different vowel productions. Vibrato rate (M = 4.58 Hz, SD = 1.45 Hz), vibrato extent (M = 1.45%, or 25 cents; SD = 0.86%, or 15 cents), SPR (M = 24.67 dB, SD = 10 dB), and the various amplitude differences did not differ between students enrolled in choir and students not enrolled in choir. There were significant within-subjects differences by vowel, as well as significant within-subjects interactions between vowel and voice part for SPR and for amplitude differences between harmonic pairs. There were also significant differences between voice parts for amplitude differences between harmonic pairs. Implications for choral music educators and suggestions for further research based on these findings are discussed in Chapter 5.
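
    As a rough illustration of one objective descriptor mentioned above, the Python sketch below computes singing power ratio from a recorded sung vowel, assuming the commonly used definition of SPR as the level difference (in dB) between the strongest spectral peak in the 0-2 kHz band and the strongest peak in the 2-4 kHz band; the file name and band edges are illustrative assumptions rather than details taken from the study.

        # Minimal sketch: singing power ratio (SPR) of a sung vowel.
        # Assumes SPR = level of strongest spectral peak in 0-2 kHz minus
        # level of strongest peak in 2-4 kHz (in dB); band edges assumed.
        import numpy as np
        from scipy.io import wavfile

        def singing_power_ratio(samples, sample_rate):
            # Long-term spectrum of the whole sample, in dB.
            window = np.hanning(len(samples))
            spectrum = np.abs(np.fft.rfft(samples * window))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
            level_db = 20.0 * np.log10(spectrum + 1e-12)

            low_band = level_db[(freqs >= 0) & (freqs < 2000)]
            high_band = level_db[(freqs >= 2000) & (freqs < 4000)]
            # SPR: strongest low-band peak minus strongest high-band peak.
            return low_band.max() - high_band.max()

        if __name__ == "__main__":
            rate, data = wavfile.read("sung_vowel_a.wav")  # hypothetical file
            if data.ndim > 1:                              # mix down to mono
                data = data.mean(axis=1)
            data = data.astype(np.float64)
            print(f"SPR: {singing_power_ratio(data, rate):.1f} dB")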

    Effectiveness of manual gesture treatment on residual /r/ articulation errors

    American English /r/ articulation errors, a functional speech sound disorder, present a unique and confounding clinical challenge because therapy-resistant residual errors often persist into adolescence and adulthood. Given the paucity of empirical research on /r/ treatment, evidence-based practice (EBP) in motor-related disorders informed clinical practice and research directions. This study investigated the efficacy of manual mimicry (a kinesthetic, gestural, and visual cue) in treating intractable /r/ errors in a young adult using a single-subject ABAB design. Perceptual accuracy judgments from three types of listeners (experts, a graduate clinician, and naïve listeners) indicated a positive treatment effect of manual mimicry cueing on vocalic /r/ productions. Electropalatography (EPG) outcome measures showed limited ability to quantitatively reflect the perceptual changes. These findings from an exploratory study provide initial evidence that the perceptual saliency of /r/ productions may be remediated using a kinesthetic, gestural, and visual cue during treatment.

    Perceptual and acoustic assessment of a child’s speech before and after laryngeal web surgery

    The aim of this paper was to highlight the importance of early diagnosis and surgery in patients with a laryngeal web in order to achieve normal breathing, and to stress the need for an interdisciplinary approach to monitoring voice quality and prosodic features at an early age. The subject was a 6.5-year-old girl who had previously been diagnosed with irregular breathing (R06). An endoscopic exam revealed a laryngeal web between the vocal folds, while the posterior intercartilaginous section of the glottis was normal. The child’s speech was recorded in an acoustic studio before the vocal-fold surgery and again six and twelve months afterwards. Because of severe dysphonia, breathing difficulties, and frequent noisy breathing (stridor), only the phonation of the vowel [a] and spontaneous speech were recorded before the surgery. There was also intense glottic and supraglottic strain before the surgery, corresponding in phonetic terms to laryngeal and supralaryngeal strain and to pathologically creaky, whispery phonation (according to the VPA protocol). This strain was visible in the chest, neck, and head, and audible in the voice quality. Acoustic analysis showed that the average F0 for the vowel [a] was remarkably high (442 Hz), and pathological values were found for local jitter (1.68%), local shimmer (0.7 dB), and the harmonics-to-noise ratio (17.6 dB). In contrast, six months after the surgery, the pitch for [a] was half the preoperative value (220.5 Hz, p < 0.001), and the local jitter for all vowels (0.30-0.47%) and the harmonics-to-noise ratio (22.46 dB, p = 0.05) were within the normal range. There was also significant improvement in mean F0, standard deviation of F0, and minimum and maximum F0 values. The average and median F0 values in spontaneous speech were also lower postoperatively. The voice quality showed a more balanced timbre (LTASS), particularly after one year. Some other prosodic features also showed improvement.
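
    For readers unfamiliar with the perturbation measures reported above, the sketch below shows how local jitter (as a percentage) and local shimmer (in dB) are conventionally computed from the cycle-to-cycle periods and peak amplitudes of a sustained vowel; the arrays used here are illustrative stand-ins, not data from this case study.

        # Minimal sketch of local jitter (%) and local shimmer (dB), following
        # the usual cycle-to-cycle definitions; input arrays are hypothetical.
        import numpy as np

        def local_jitter_percent(periods):
            # Mean absolute difference between consecutive periods,
            # divided by the mean period, expressed as a percentage.
            periods = np.asarray(periods, dtype=float)
            return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

        def local_shimmer_db(amplitudes):
            # Mean absolute dB difference between the peak amplitudes
            # of consecutive cycles.
            amplitudes = np.asarray(amplitudes, dtype=float)
            return np.mean(np.abs(20.0 * np.log10(amplitudes[1:] / amplitudes[:-1])))

        if __name__ == "__main__":
            # Hypothetical periods (s) and peak amplitudes for a sustained [a].
            periods = np.array([0.00226, 0.00229, 0.00224, 0.00228, 0.00225])
            amplitudes = np.array([0.51, 0.49, 0.52, 0.50, 0.48])
            print(f"local jitter:  {local_jitter_percent(periods):.2f} %")
            print(f"local shimmer: {local_shimmer_db(amplitudes):.2f} dB")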

    Pitch-Informed Solo and Accompaniment Separation

    This thesis addresses the development of a system for pitch-informed solo and accompaniment separation capable of separating main instruments from the music accompaniment regardless of the musical genre of the track or the type of accompaniment. For the solo instrument, only pitched monophonic instruments were considered, in a single-channel scenario where no panning or spatial location information is available. In the proposed method, pitch information is used as the initial stage of a sinusoidal modeling approach that estimates the spectral information of the solo instrument from a given audio mixture. Instead of estimating the solo instrument on a frame-by-frame basis, the proposed method gathers information into tone objects to perform separation. Tone-based processing allowed the inclusion of novel processing stages for attack refinement, transient interference reduction, common amplitude modulation (CAM) of tone objects, and better estimation of the non-harmonic elements that can occur in musical instrument tones. The proposed solo and accompaniment algorithm supports real-time processing and is thus suitable for real-world applications. A study was conducted to better model the magnitude, frequency, and phase of isolated musical instrument tones. As a result of this study, temporal envelope smoothness, the inharmonicity of musical instruments, and phase expectation were exploited in the proposed separation method. Additionally, an algorithm for harmonic/percussive separation based on phase expectation was proposed; it shows improved perceptual quality with respect to state-of-the-art methods for harmonic/percussive separation. The proposed solo and accompaniment method obtained perceptual quality scores comparable to other state-of-the-art algorithms in the SiSEC 2011 and SiSEC 2013 evaluation campaigns, and outperformed the reference algorithm on the instrumental dataset described in this thesis. As a use case of solo and accompaniment separation, a listening test was conducted to assess separation quality requirements in the context of music education. Results from the listening test showed that the solo and accompaniment tracks should be optimized differently to suit the quality requirements of music education. The Songs2See application was presented as commercially available music learning software that already integrates the proposed solo and accompaniment separation method.
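
    As a rough illustration of the general idea, and not of the tone-object-based algorithm described in the thesis, the following Python sketch separates a solo estimate from a monaural mixture by building a harmonic mask around a given per-frame pitch track in the STFT domain; the pitch values, harmonic tolerance, and file names are assumptions made purely for the example, and attack refinement, transient handling, and CAM are omitted.

        # Minimal sketch of pitch-informed solo/accompaniment separation via a
        # harmonic STFT mask. Illustration of the general idea only.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import stft, istft

        def harmonic_mask(freqs, f0_track, n_harmonics=20, tolerance_hz=40.0):
            # Binary mask: keep bins within +/- tolerance of each harmonic of
            # the (hypothetical) per-frame pitch track.
            mask = np.zeros((len(freqs), len(f0_track)), dtype=float)
            for t, f0 in enumerate(f0_track):
                if f0 <= 0:               # unvoiced / silent frame
                    continue
                for h in range(1, n_harmonics + 1):
                    idx = np.abs(freqs - h * f0) <= tolerance_hz
                    mask[idx, t] = 1.0
            return mask

        if __name__ == "__main__":
            fs, mix = wavfile.read("mixture_mono.wav")     # hypothetical file
            mix = mix.astype(np.float64)
            freqs, times, spec = stft(mix, fs, nperseg=4096, noverlap=3072)

            # Hypothetical constant pitch track (440 Hz); a real system would
            # estimate f0 per frame with a melody-extraction front end.
            f0_track = np.full(spec.shape[1], 440.0)

            mask = harmonic_mask(freqs, f0_track)
            _, solo = istft(spec * mask, fs, nperseg=4096, noverlap=3072)
            _, accomp = istft(spec * (1.0 - mask), fs, nperseg=4096, noverlap=3072)

            wavfile.write("solo_estimate.wav", fs, solo.astype(np.float32))
            wavfile.write("accompaniment_estimate.wav", fs, accomp.astype(np.float32))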

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 out of a strongly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from neonates to adults and the elderly. Over the years the initial topics have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty years of uninterrupted and successful research in the field of voice analysis.

    Music contact and language contact: A proposal for comparative research

    The concept of convergence, from the study of language contact, provides a model for better understanding interactions between cognitive systems of the same type (for example, in bilingualism, subsystem instantiations of the same kind of knowledge representation and its associated processing mechanisms). For a number of reasons, musical ability is the domain that allows for the most interesting comparisons and contrasts with language in this area of research. Both cross-language and cross-musical-idiom interactions show a vast array of different kinds of mutual influence, all of them highly productive, ranging from so-called transfer effects to total replacement (attrition of the replaced subsystem). The study of music contact should also help investigators conceptualize potential structural parallels between separate mental faculties, most importantly, it would seem, between those that appear to share component competence and processing modules. The first part of the proposal is to determine whether the comparison between the two kinds of convergence (in language and in music) is a useful way of thinking about how properties of each system are similar, analogous, different, and so forth. This leads to a more general discussion of the design features of mental faculties, for example, what might define them “narrowly.”

    Pan European Voice Conference - PEVOC 11

    The Pan European VOice Conference (PEVOC) was born in 1995, and in 2015 it celebrates the 20th anniversary of its establishment: an important milestone that clearly expresses the strength of the scientific community’s interest in the topics of this conference. The most significant themes of PEVOC are singing pedagogy and art, but also occupational voice disorders, neurology, rehabilitation, and image and video analysis. PEVOC takes place in different European cities every two years (www.pevoc.org). The PEVOC 11 conference includes a symposium of the Collegium Medicorum Theatri (www.comet-collegium.com).

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 out of a strongly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from neonates to adults and the elderly. Over the years the initial topics have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy.

    A feasibility study of visual feedback speech therapy for nasal speech associated with velopharyngeal dysfunction

    Nasal speech associated with velopharyngeal dysfunction (VPD) is seen in children and adults with cleft palate and other conditions that affect soft palate function, with negative effects on quality of life. Treatment options include surgery and prosthetics, depending on the nature of the problem. Speech therapy is rarely offered as an alternative treatment because evidence from previous studies is weak. However, there is evidence that visual biofeedback approaches are beneficial in other speech disorders and could benefit individuals with nasal speech who demonstrate potential for improvement. Theories of learning and feedback also lend support to the view that a combined feedback approach would be most suitable. This feasibility study therefore aimed to develop and evaluate Visual Feedback Therapy (VFTh), a new behavioural speech therapy intervention incorporating speech activities supported by visual biofeedback and performance feedback, for individuals with mild to moderate nasal speech. Evaluation included perceptual, instrumental, and quality-of-life measures. Eighteen individuals with nasal speech were recruited from a regional cleft palate centre, and twelve completed the study: six female and six male, comprising eleven children (7 to 13 years) and one adult (43 years). Six participants had a repaired cleft palate and six had VPD but no cleft. Participants received 8 sessions of VFTh from one therapist. The findings suggest that the intervention is feasible, but some changes are required, including screening participants for adverse responses and minimising disruptions to intervention scheduling. In blinded evaluation there was considerable variation in individual results, but for eight participants positive changes occurred in at least one speech symptom between pre- and post-intervention assessments. Seven participants also showed improved nasalance scores and seven had improved quality-of-life scores. This small study has provided important information about the feasibility of delivering and evaluating VFTh. It suggests that VFTh shows promise as an alternative treatment option for nasal speech, but that further preliminary development and evaluation are required before larger-scale research is indicated.
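
    As background to the nasalance scores mentioned above, the sketch below computes a nasalance percentage from separate nasal- and oral-microphone channels using one common formulation, nasal energy divided by the sum of nasal and oral energy times 100 (here approximated with frame RMS amplitudes); the two-channel file name and frame settings are illustrative assumptions, and commercial devices differ in filtering and averaging details.

        # Minimal sketch of a nasalance score from a two-channel recording
        # (channel 0 = nasal microphone, channel 1 = oral microphone), using
        # frame RMS as a stand-in for acoustic energy: nasal / (nasal + oral) * 100.
        import numpy as np
        from scipy.io import wavfile

        def nasalance_percent(nasal, oral, frame_len=1024):
            # Frame-wise RMS of each channel, then the mean ratio across frames.
            n_frames = min(len(nasal), len(oral)) // frame_len
            scores = []
            for i in range(n_frames):
                sl = slice(i * frame_len, (i + 1) * frame_len)
                nasal_rms = np.sqrt(np.mean(nasal[sl] ** 2))
                oral_rms = np.sqrt(np.mean(oral[sl] ** 2))
                total = nasal_rms + oral_rms
                if total > 0:
                    scores.append(100.0 * nasal_rms / total)
            return float(np.mean(scores))

        if __name__ == "__main__":
            rate, data = wavfile.read("nasal_oral_stereo.wav")  # hypothetical file
            data = data.astype(np.float64)
            nasal, oral = data[:, 0], data[:, 1]
            print(f"nasalance: {nasalance_percent(nasal, oral):.1f} %")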