    Basic emotions in read Estonian speech: acoustic analysis and modelling (Põhiemotsioonid eestikeelses etteloetud kõnes: akustiline analüüs ja modelleerimine)

    The electronic version of this dissertation does not include the publications.

    The present doctoral dissertation had two major purposes: (a) to find out and describe the acoustic expression of three basic emotions – joy, sadness and anger – in read Estonian speech, and (b) to create, based on the resulting description, acoustic models of emotional speech designed to help parametric synthesis of Estonian speech recognizably express these emotions. Since synthetic speech has many applications in different fields, such as human-machine interaction, multimedia, or aids for the disabled, it is vital that synthetic speech should sound natural, that is, as human-like as possible. One way to achieve naturalness is to add emotions to the synthetic speech by means of models that feed the synthesiser the combinations of acoustic parameter values necessary for emotional expression. In order to create such models of emotional speech, it is first necessary to have detailed knowledge of the vocal expression of emotions in human speech. For that purpose I investigated to what extent, if any, and in what direction emotions influence the values of acoustic speech parameters (e.g., fundamental frequency, intensity and speech rate), and which parameters enable discrimination of emotions from each other and from neutral speech. The results provided material for creating acoustic models of emotions*, which were presented to evaluators asked to decide which of the models produced synthetic speech with recognisable emotions. The experiment confirmed that, with models based on the acoustic results, an Estonian speech synthesiser can satisfactorily express sadness and anger, whereas joy was not as well recognised by listeners. This doctoral dissertation describes one possible account of how joy, sadness and anger are expressed vocally in Estonian speech and presents models that enable emotions to be added to Estonian synthetic speech. The study serves as a starting point for the future development of acoustic models for Estonian emotional synthetic speech.
    * Recorded examples of emotional speech synthesised using the test models can be accessed at https://www.eki.ee/heli/index.php?option=com_content&view=article&id=7&Itemid=494
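
    The dissertation does not publish its models in code form; as a rough illustration of the kind of rule-based acoustic model described above, the sketch below encodes each emotion as a set of multipliers applied to a neutral utterance's fundamental frequency, intensity and speech rate. All names and numerical values are hypothetical placeholders, not the parameter values reported in the thesis; only the general directions of change follow common findings in the emotional-speech literature.

```python
from dataclasses import dataclass

@dataclass
class Prosody:
    """Global prosodic targets for one utterance."""
    f0_hz: float         # mean fundamental frequency
    intensity_db: float  # mean intensity
    rate_syll_s: float   # speech rate in syllables per second

# Hypothetical emotion models: multiplicative offsets relative to neutral speech.
# The directions (e.g. anger raising F0 and intensity, sadness slowing the rate)
# mirror common reports in the literature, not the thesis's measured values.
EMOTION_MODELS = {
    "neutral": {"f0": 1.00, "int": 1.00, "rate": 1.00},
    "anger":   {"f0": 1.15, "int": 1.10, "rate": 1.05},
    "sadness": {"f0": 0.95, "int": 0.95, "rate": 0.85},
    "joy":     {"f0": 1.20, "int": 1.05, "rate": 1.10},
}

def apply_emotion(neutral: Prosody, emotion: str) -> Prosody:
    """Scale neutral prosodic targets by the chosen emotion model."""
    m = EMOTION_MODELS[emotion]
    return Prosody(
        f0_hz=neutral.f0_hz * m["f0"],
        intensity_db=neutral.intensity_db * m["int"],
        rate_syll_s=neutral.rate_syll_s * m["rate"],
    )

if __name__ == "__main__":
    neutral = Prosody(f0_hz=110.0, intensity_db=65.0, rate_syll_s=5.0)
    print(apply_emotion(neutral, "sadness"))
```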

    The computer synthesis of expressive three-dimensional facial character animation.

    This research is concerned with the design, development and implementation of three-dimensional computer-generated facial images capable of expression, gesture and speech. A review of previous work in chapter one shows that, to date, the model of computer-generated faces has been one in which construction and animation were not separated and which therefore possessed only a limited expressive range. It is argued in chapter two that the physical description of the face cannot be seen as originating from a single generic mould. Chapter three therefore describes data acquisition techniques employed in the computer generation of free-form surfaces which are applicable to three-dimensional faces. Expressions are the result of the distortion of the surface of the skin by the complex interactions of bone, muscle and skin. Chapter four demonstrates, with static images and short animation sequences on video, that a muscle-model process algorithm can simulate the primary characteristics of the facial muscles. Three-dimensional speech synchronization was the most complex problem to solve effectively. Chapter five describes two successful approaches: the direct mapping of mouth shapes in two dimensions onto the model in three dimensions, and geometric distortions of the mouth created by the contraction of specified muscle combinations. Chapter six describes the implementation of the software for this research and argues the case for a parametric approach. Chapter seven is concerned with the control of facial articulations and discusses a more biological approach to these. Finally, chapter eight draws conclusions from the research and suggests further extensions.
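
    The abstract describes the muscle-model algorithm only in general terms; the sketch below illustrates one common form such a model can take, in which vertices of a facial mesh are pulled toward a linear muscle's attachment point under a radial falloff. The falloff shape, influence radius and function names are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def contract_linear_muscle(vertices, attachment, insertion, contraction,
                           influence_radius=1.0):
    """
    Displace mesh vertices toward a linear muscle's attachment point.

    vertices: (N, 3) array of vertex positions
    attachment: (3,) point where the muscle is anchored to "bone"
    insertion: (3,) point where the muscle pulls on the "skin"
    contraction: scalar in [0, 1], fraction of maximal shortening
    influence_radius: radius of the muscle's field, in mesh units
    """
    attachment = np.asarray(attachment, dtype=float)
    insertion = np.asarray(insertion, dtype=float)
    verts = np.asarray(vertices, dtype=float).copy()

    # Distance from the insertion point controls how strongly each vertex is
    # pulled; a cosine falloff fades the effect to zero at the field boundary.
    dist = np.linalg.norm(verts - insertion, axis=1)
    inside = dist < influence_radius
    falloff = np.zeros(len(verts))
    falloff[inside] = 0.5 * (1.0 + np.cos(np.pi * dist[inside] / influence_radius))

    # Pull affected vertices along their direction toward the attachment point.
    pull_dir = attachment - verts
    verts += contraction * falloff[:, None] * pull_dir
    return verts
```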

    Development of Markerless Systems for Automatic Analysis of Movements and Facial Expressions: Applications in Neurophysiology

    This project focuses on the development of markerless methods for studying facial expressions and movements in neurology, concentrating on Parkinson’s disease (PD) and disorders of consciousness (DOC). PD is a neurodegenerative illness that affects around 2% of the population over 65 years old. Impairments of voice and speech are among the main signs of PD. This set of impairments is called hypokinetic dysarthria, because of the reduced range of the movements involved in speech. The reduction can also be visible in other facial muscles, leading to hypomimia. Despite the high percentage of patients who suffer from dysarthria and hypomimia, only a few of them undergo speech therapy aimed at improving the dynamics of articulatory and facial movements, mainly because of the lack of low-cost methodologies that could be implemented at home. The DOC that follow coma are the Vegetative State (VS), characterized by the absence of self-awareness and awareness of the environment, and the Minimally Conscious State (MCS), in which certain behaviors are sufficiently reproducible to be distinguished from reflex responses. The differential diagnosis between VS and MCS, based mainly on neuro-behavioral scales, can be hard and is prone to a high rate of misdiagnosis (~40%). The first diagnosis after coma plays a key role in planning the rehabilitation of DOC patients, since MCS patients are more likely to recover consciousness than VS patients. Concerning PD, the aim is the development of contactless systems that can be used to study symptoms related to speech and facial movements and expressions; the methods proposed here, based on acoustical analysis and video-processing techniques, could also support patients during speech therapy at home. Concerning DOC patients, the project focuses on the assessment of reflex and cognitive responses to standardized stimuli, which would make it possible to objectify the perceptual analysis performed by clinicians.
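
    As a minimal illustration of the kind of low-cost, markerless video measure this project aims at, the sketch below computes a global movement index from grey-level differences between consecutive frames using OpenCV. Analysing the whole frame rather than a detected face region, and the file name in the usage comment, are simplifying assumptions rather than the project's actual pipeline.

```python
import cv2
import numpy as np

def movement_index(video_path: str) -> np.ndarray:
    """Mean absolute grey-level change between consecutive video frames."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"Cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    index = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Larger values indicate larger movement between consecutive frames.
        index.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return np.array(index)

# Usage (hypothetical file): a consistently flat curve over time could hint at
# hypomimia, although clinical use would need face detection and normalisation.
# curve = movement_index("patient_session.mp4")
```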

    Relations between music and speech from the perspectives of dynamics, timbre and pitch

    Despite the vast amount of scholarly effort to compare music and speech from a wide range of perspectives, some of the most fundamental aspects of music and speech still remain unexplored. This PhD thesis tackles three aspects essential to understanding the relations between music and speech: dynamics, timbre and pitch. In terms of dynamics, previous research has used perception experiments in which dynamics is represented by acoustic intensity, with little attention to the fact that dynamics is an important mechanism of motor movements in both music performance and speech production. Therefore, the first study of this thesis compared the dynamics of music and speech using production experiments with a focus on motor movements: finger force in affective piano performance was used as an index of music dynamics, and articulatory effort in affective Mandarin speech was used as an index of speech dynamics. The results showed both similarities and differences between the two domains. With regard to timbre, there has been a long-held observation that the timbre of musical instruments mimics the human voice, particularly in conveying emotions. However, little research has been done to empirically investigate the emotional connotations of the timbre of isolated sounds of musical instruments in relation to affective human speech. Hence, the second study explored this issue using behavioral and ERP methods. The results largely supported previous observations, although some fundamental differences also existed. In terms of pitch, some studies have suggested that music could have close relations with speech with regard to pitch prominence and expectation patterns. Nevertheless, the functional differences of pitch in music and speech could also imply that speech does not necessarily follow the same pitch patterns as music in conveying prominence and expectation. So far there is little empirical evidence to either support or refute these observations; hence, the third study examined this issue. The results showed that the differences outweighed the similarities between music and speech in terms of pitch prominence and expectation. In conclusion, from three perspectives essential to music and speech, this thesis has shed new light on the overlapping yet distinct relations between the two domains.

    Emotion recognition from syllabic units using k-nearest-neighbor classification and energy distribution

    In this article, we present an automatic technique for recognizing emotional states from speech signals. The main focus of this paper is to present an efficient and reduced set of acoustic features that allows us to recognize four basic human emotions (anger, sadness, joy, and neutral). The proposed feature vector is composed of twenty-eight measurements corresponding to standard acoustic features such as formants and fundamental frequency (obtained with the Praat software), as well as new features based on the energies in specific frequency bands and their distributions (computed with MATLAB code). The measurements are extracted from consonant/vowel (CV) syllabic units derived from the Moroccan Arabic dialect emotional database (MADED) corpus. The collected data are then used to train a k-nearest-neighbor (KNN) classifier that performs the automated recognition phase. The results reach 64.65% accuracy for the multi-class classification and 94.95% for the classification between positive and negative emotions.
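
    The paper's exact twenty-eight-dimensional feature set is not reproduced here; the sketch below only illustrates the general pipeline it describes, i.e. band-energy features computed per unit and fed to a k-nearest-neighbor classifier, using NumPy and scikit-learn. The band edges, the value of k and the randomly generated stand-in data are assumptions for illustration, not the MADED corpus or the reported features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def band_energies(signal, sr, bands=((0, 500), (500, 1000), (1000, 2000), (2000, 4000))):
    """Relative spectral energy of one speech unit in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    total = spectrum.sum() + 1e-12
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
                     for lo, hi in bands])

# Toy stand-in for a labelled corpus: random signals with random emotion labels.
rng = np.random.default_rng(0)
sr = 16000
X = np.array([band_energies(rng.standard_normal(sr // 4), sr) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # 0=anger, 1=sadness, 2=joy, 3=neutral

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_train), y_train)
print("toy accuracy:", knn.score(scaler.transform(X_test), y_test))
```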

    Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy

    The MAVEBA Workshop is held every two years; its proceedings collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies. The Workshop has the sponsorship of: Ente Cassa Risparmio di Firenze, COST Action 2103, Biomedical Signal Processing and Control Journal (Elsevier Eds.), and the IEEE Biomedical Engineering Soc. Special issues of international journals have been, and will be, published, collecting selected papers from the conference.

    Vocal biomarkers of clinical depression: working towards an integrated model of depression and speech

    Speech output has long been considered a sensitive marker of a person’s mental state. It has previously been examined as a possible biomarker for diagnosis and treatment response in certain mental health conditions, including clinical depression. To date, it has been difficult to draw robust conclusions from past results due to diversity in samples, speech material, investigated parameters, and analytical methods. Within this exploratory study of speech in clinically depressed individuals, articulatory and phonatory behaviours are examined in relation to psychomotor symptom profiles and overall symptom severity. A systematic review of the existing body of knowledge on the effects of depression on speech provided context for the experimental setup of this work. Examinations of vowel space, monophthong and diphthong productions, as well as a multivariate acoustic analysis of other speech parameters (e.g., F0 range, perturbation measures, composite measures), are undertaken with the goal of creating a working model of the effects of depression on speech. Initial results demonstrate that overall vowel space area did not differ between depressed and healthy speakers; on closer inspection, however, this was due to more specific deficits seen in depressed patients along the first formant (F1) axis. Speakers with depression were more likely to produce vowels centralised along F1, as compared to F2, and this was more pronounced for low-front vowels, which are more complex given the degree of tongue-jaw coupling required for their production. This pattern was seen in both monophthong and diphthong productions. Other articulatory and phonatory measures were also inspected in a factor analysis, suggesting additional vocal biomarkers for consideration in the diagnosis and treatment assessment of depression, including aperiodicity measures (e.g., higher shimmer and jitter), changes in spectral slope and tilt, and additive noise measures such as increased harmonics-to-noise ratio. Intonation was also affected by diagnostic status, but only for specific speech tasks. These results suggest that laryngeal and articulatory control is reduced by depression. The findings support the clinical utility of combining Ellgring and Scherer’s (1996) psychomotor retardation and social-emotional hypotheses to explain the effects of depression on speech, which suggest that the observed changes are due to a combination of cognitive, psycho-physiological and motoric mechanisms. Ultimately, depressive speech can be modelled along a continuum of hypo- to hyper-speech, where depressed individuals assess communicative situations, assess speech requirements, and then engage in the minimum amount of motoric output necessary to convey their message. As speakers’ depressive symptoms fluctuate throughout the course of their disorder, they move along the hypo-hyper-speech continuum and their speech is affected accordingly. Recommendations for future clinical investigations of the effects of depression on speech are also presented, including suggestions for recording and reporting standards. The results contribute to cross-disciplinary research into speech analysis between the fields of psychiatry, computer science, and speech science.
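
    As a small worked example of the vowel space area measure discussed above, the sketch below computes the area of the polygon spanned by mean (F1, F2) values using the shoelace formula. The corner-vowel formant values are generic placeholders, not data from this study.

```python
import numpy as np

def vowel_space_area(formants):
    """
    Area of the polygon spanned by (F1, F2) vowel means, via the shoelace formula.

    formants: list of (F1, F2) pairs in Hz, ordered around the polygon.
    """
    pts = np.asarray(formants, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Placeholder corner-vowel means (Hz) for /i/, /a/, /u/ -- illustrative only.
# Centralisation along F1, as reported above, would shrink this area along
# the F1 axis even when the overall area looks comparable across groups.
triangle = [(300, 2300), (750, 1300), (350, 800)]
print("vowel triangle area (Hz^2):", vowel_space_area(triangle))
```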