
    Determination of Formant Features in Czech and Slovak for GMM Emotional Speech Classifier

    The paper addresses the determination of formant features (FF), which describe vocal tract characteristics. It analyzes the positions of the first three formants together with their bandwidths and formant tilts, followed by a statistical evaluation and comparison of the FF. The experiment was carried out on speech material consisting of sentences spoken by male and female speakers in four emotional states (joy, sadness, anger, and a neutral state) in Czech and Slovak. The statistical distributions of the analyzed formant frequencies and formant tilts differentiate well between neutral and emotional styles for both voices. In contrast, the formant 3-dB bandwidths show no correlation with either the speaking style or the type of voice. These spectral parameters, together with other speech characteristics, were used in the feature vector of a Gaussian mixture model (GMM) emotional speech style classifier that is currently under development. The overall mean classification error rate is about 18%, and the best error rate obtained is 5% for the sadness style of the female voice. These values are acceptable at this first stage of development of the GMM classifier, which is intended for evaluating synthetic speech quality after voice conversion and emotional speech style transformation.
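    As a rough illustration of the pipeline described above (not the authors' implementation; the exact feature set, bandwidth and tilt computation, and GMM configuration are not reproduced), the following Python sketch estimates the first three formant frequencies per frame via LPC root-finding and scores utterances against one GMM per emotional style. Function names, LPC order, and GMM settings are illustrative assumptions.

        # Minimal sketch: LPC-based formant estimation and a per-style GMM classifier.
        import numpy as np
        import librosa
        from sklearn.mixture import GaussianMixture

        def estimate_formants(frame, sr, lpc_order=12, n_formants=3):
            """First few formant frequencies (Hz) of one voiced frame via LPC roots."""
            a = librosa.lpc(frame.astype(float), order=lpc_order)
            roots = [r for r in np.roots(a) if np.imag(r) >= 0]
            freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
            freqs = freqs[freqs > 90]            # drop near-DC roots
            return freqs[:n_formants]

        def utterance_features(path, frame_len=0.025, hop=0.010):
            """Formant tracks (n_frames x 3) for one utterance."""
            y, sr = librosa.load(path, sr=16000)
            n, h = int(frame_len * sr), int(hop * sr)
            feats = []
            for start in range(0, len(y) - n, h):
                f = estimate_formants(y[start:start + n] * np.hamming(n), sr)
                if len(f) == 3:
                    feats.append(f)
            return np.array(feats)

        def train_style_models(features_by_style, n_components=8):
            """One GMM per emotional style, fitted on stacked frame-level features."""
            return {style: GaussianMixture(n_components, covariance_type="diag",
                                           random_state=0).fit(np.vstack(frames))
                    for style, frames in features_by_style.items()}

        def classify(models, feats):
            """Pick the style whose GMM gives the highest average log-likelihood."""
            return max(models, key=lambda s: models[s].score(feats))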

    Opening Access to Visual Exploration of Audiovisual Digital Biomarkers: an OpenDBM Analytics Tool

    Digital biomarkers (DBMs) are a growing field and are increasingly tested in the therapeutic areas of psychiatric and neurodegenerative disorders. Meanwhile, isolated silos of knowledge about the use of audiovisual DBMs in industry, academia, and clinics hinder their widespread adoption in clinical research. How can we help these non-technical domain experts explore audiovisual digital biomarkers? The use of open-source software in biomedical research to extract changes in patient behavior is growing and inspiring a shift toward accessibility that addresses this problem. OpenDBM integrates several popular open-source audio and visual behavior extraction toolkits. We present a visual analysis tool as an extension of the growing open-source software OpenDBM to promote the adoption of audiovisual DBMs in basic and applied research. Our tool illustrates patterns in behavioral data while supporting interactive visual analysis of any subset of derived or raw DBM variables extracted through OpenDBM.
    Comment: 6 pages, 2 figures, 2022 IEEE VIS Workshop - Visualization in BioMedical A
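    For a feel of this kind of analysis outside the tool itself, the sketch below plots a chosen subset of per-session DBM variables over time from a flat CSV export. The file layout and column names are hypothetical and do not reflect OpenDBM's actual output schema.

        # Illustrative only: line plots of selected DBM variables against time.
        import numpy as np
        import pandas as pd
        import matplotlib.pyplot as plt

        def plot_dbm_subset(csv_path, variables, time_col="frame"):
            """Plot each requested variable in its own panel, sharing the time axis."""
            df = pd.read_csv(csv_path)
            fig, axes = plt.subplots(len(variables), 1, sharex=True,
                                     figsize=(8, 2 * len(variables)))
            axs = np.atleast_1d(axes)
            for ax, var in zip(axs, variables):
                ax.plot(df[time_col], df[var])
                ax.set_ylabel(var)
            axs[-1].set_xlabel(time_col)
            plt.tight_layout()
            plt.show()

        # plot_dbm_subset("session_01.csv", ["aco_pitch", "fac_asymmaskmouth"])  # hypothetical names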

    EEG-based emotion classification using spiking neural networks

    This paper proposes a novel method of recognizing emotional states using spiking neural networks (SNNs) and electroencephalograph (EEG) processing techniques. Three algorithms, namely the discrete wavelet transform (DWT), variance, and the fast Fourier transform (FFT), are employed to extract features from the EEG signals, which are then fed to the SNN for emotion classification. Two datasets, DEAP and SEED, are used to validate the proposed method. In the former, the emotional states comprise arousal, valence, dominance, and liking, each labeled as either high or low. In the latter, the emotional states are divided into three categories (negative, positive, and neutral). Experimental results show that, using the variance processing technique and the SNN, arousal, valence, dominance, and liking are classified with accuracies of 74%, 78%, 80%, and 86.27% on the DEAP dataset, and an overall accuracy of 96.67% is achieved on the SEED dataset, outperforming the FFT and DWT processing methods. This work also achieves better emotion classification performance than the benchmark approaches and demonstrates the advantages of using SNNs for emotion state classification.
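    The abstract names DWT, variance, and FFT as the feature-extraction routes feeding the SNN. The sketch below illustrates those three routes on a single EEG epoch under assumed settings (db4 wavelet, four-level decomposition, 128 Hz sampling, conventional band edges); a plain SVM stands in for the spiking classifier, which would require a dedicated SNN simulator and is not shown.

        # Sketch of DWT, variance, and FFT features for one EEG epoch of shape
        # (channels, samples); parameter choices are illustrative, not the paper's.
        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_features(epoch, wavelet="db4", level=4):
            """Log energy of each wavelet sub-band, per channel."""
            feats = []
            for ch in epoch:
                coeffs = pywt.wavedec(ch, wavelet, level=level)
                feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
            return np.array(feats)

        def variance_features(epoch):
            """Per-channel signal variance."""
            return np.var(epoch, axis=1)

        def fft_features(epoch, fs=128, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
            """Mean spectral power per channel in theta/alpha/beta/gamma bands."""
            freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
            power = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
            return np.array([power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                             for lo, hi in bands]).ravel()

        def train(epochs, labels):
            """Stand-in classifier (SVM, not an SNN) trained on variance features."""
            X = np.stack([variance_features(e) for e in epochs])
            return SVC().fit(X, labels)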

    2017 Annual Research Symposium Abstract Book

    2017 annual volume of abstracts for science research projects conducted by students at Trinity College

    Econometrics meets sentiment: an overview of methodology and applications

    The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in an econometric analysis of the relationships between sentiment and other variables. We survey this emerging research field and refer to it as sentometrics, a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate them with empirical results, and discuss useful software.
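    As a minimal illustration of the sentometrics workflow sketched above (not the authors' own software), the following Python snippet aggregates document-level sentiment scores into a daily sentiment index and regresses a target series on its lag. The column names, aggregation rule, and lag choice are assumptions for the example.

        # Toy sentometrics pipeline: sentiment index construction, then OLS.
        import pandas as pd
        import statsmodels.api as sm

        def sentiment_index(docs: pd.DataFrame) -> pd.Series:
            """Aggregate per-document sentiment scores into a daily index."""
            return (docs.set_index("date")["score"]
                        .resample("D").mean()
                        .interpolate()
                        .rename("sentiment"))

        def regress_on_sentiment(index: pd.Series, target: pd.Series, lag: int = 1):
            """OLS of the target variable (e.g. daily returns) on the lagged index."""
            df = pd.concat({"y": target, "sent": index.shift(lag)}, axis=1).dropna()
            return sm.OLS(df["y"], sm.add_constant(df["sent"])).fit()

        # Usage with hypothetical data:
        # docs = pd.DataFrame({"date": pd.to_datetime([...]), "score": [...]})
        # model = regress_on_sentiment(sentiment_index(docs), daily_returns)
        # print(model.summary())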

    Thirty-second Annual Symposium of Trinity College Undergraduate Research

    2019 annual volume of abstracts for science research projects conducted by students at Trinity College

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) was established in 1999 out of a strongly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from neonates to adults and the elderly. Over the years the initial topics have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA is held every two years in Firenze, Italy.