
    Two Tools for Semi-automatic Phonetic Labelling of Large Corpora

    This paper presents two tools that enable reliable semi-automatic labelling of large corpora: an automatic HMM-based labelling tool, and an assessment-and-decision system that validates the automatically labelled sentences. The decision system takes the output of another automatic labeller and compares the two results through a parametrisable comparison process. We also propose a generic methodology to improve labelling accuracy and to reduce the amount of manual verification.

    Semi-automatic phonetic labelling of large corpora

    This paper presents a methodology for semi-automatically labelling large corpora. The methodology rests on three main points: using several concurrent automatic stochastic labellers; decomposing the labelling of the whole corpus into an iterative refining process; and building a labelling comparison procedure that applies phonological and acoustic-phonetic rules to evaluate the similarity of the different labellings of one sentence. After detailing these three points, we describe our HMM-based labelling tool and the application of the methodology to the Swiss French POLYPHON database.
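The comparison procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual system: the phone symbols, the allowed-confusion set, and the agreement threshold are invented placeholders, and a real comparison would first time-align the two labellers' outputs.

```python
# Pairs of phones the comparison rules treat as equivalent (illustrative).
ALLOWED_CONFUSIONS = {frozenset(p) for p in [("e", "E"), ("o", "O")]}

def phones_match(a, b):
    """Two labels match if identical or listed as an allowed confusion."""
    return a == b or frozenset((a, b)) in ALLOWED_CONFUSIONS

def agreement(seq_a, seq_b):
    """Fraction of positions where the two labellers agree."""
    if len(seq_a) != len(seq_b):
        return 0.0  # a real system would align the sequences first
    hits = sum(phones_match(a, b) for a, b in zip(seq_a, seq_b))
    return hits / len(seq_a)

def validate(seq_a, seq_b, threshold=0.9):
    """Accept the automatic labelling only when agreement is high enough;
    sentences below the threshold go to manual verification."""
    return agreement(seq_a, seq_b) >= threshold
```

Sentences rejected by `validate` form the residue that the iterative refining process sends back for manual checking.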

    Site-Specific Rules Extraction in Precision Agriculture

    Sustainably increasing food production to meet the needs of a growing world population is a genuine challenge once the constant impact of pests and diseases on crops is taken into account. Because of the substantial economic losses they cause, the use of chemical treatments is too high, polluting the environment and inducing resistance to the treatments themselves. In this context, the agricultural community envisages more site-specific treatments, together with automatic validation of legal compliance. However, these treatments are specified in regulations expressed in natural language, so translating regulations into a machine-processable representation is becoming increasingly important in precision agriculture. At present, the requirements for translating regulations into formal rules are far from being met, and with the rapid development of agricultural science, manual verification of legal compliance is becoming intractable. The aim of this thesis is to build and evaluate a rule-extraction system that effectively distils the relevant information from regulations and transforms natural-language rules into a structured, machine-processable format. To this end, rule extraction is split into two steps. The first is to build a domain ontology: a model describing crop disorders caused by diseases, and their treatments. The second is to extract information to populate the ontology. Since we use machine-learning techniques, we follow the MATTER methodology to annotate the regulations.
    Once the corpus was created, we built a rule-category classifier that distinguishes obligations from prohibitions, and a rule-constraint extraction system that recognises the information needed to retain isomorphism with the original regulation. For these components we employed, among other deep-learning techniques, convolutional neural networks and Long Short-Term Memory networks, with more traditional algorithms such as support-vector machines and random forests as baselines. As a result, we present the PCT-O ontology, which has been aligned with other resources such as NCBI, PubChem, ChEBI and Wikipedia. The model can be used to identify disorders, to analyse conflicts between treatments, and to compare legislation across countries. The extraction systems were evaluated empirically with several metrics, using F1 to select the best systems. The best rule-category classifier achieves a macro F1 of 92.77% and a binary F1 of 85.71%, using a bidirectional long short-term memory network with word embeddings as input. The best rule-constraint extractor achieves a micro F1 of 88.3%, using a bidirectional long short-term memory network over a combination of character embeddings and word embeddings.
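The target of the extraction pipeline, a structured, machine-processable rule record, can be illustrated as follows. This is a toy sketch: the field names and the keyword heuristic are assumptions for illustration, whereas the thesis classifies rule categories with a bidirectional LSTM, not keyword matching.

```python
# Cue phrases used by this toy classifier (illustrative, not from the thesis).
PROHIBITION_CUES = ("must not", "shall not", "may not", "prohibited")

def classify_rule(text):
    """Toy obligation/prohibition decision via keyword cues; the thesis
    uses a bidirectional LSTM over word embeddings instead."""
    lowered = text.lower()
    if any(cue in lowered for cue in PROHIBITION_CUES):
        return "prohibition"
    return "obligation"

def extract_rule(text, constraints):
    """Bundle a natural-language rule into a structured record that keeps
    its category and constraints available for downstream compliance checks."""
    return {"text": text, "category": classify_rule(text), "constraints": constraints}
```

For example, a buffer-zone rule would yield a record whose `category` is `"prohibition"` and whose `constraints` retain the distance and target fields needed to stay isomorphic with the regulation text.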

    Adapting Prosody in a Text-to-Speech System


    A Phonetic model of English intonation

    This thesis proposes a phonetic model of English intonation: a system for linking the phonological and F₀ descriptions of an utterance. It is argued that such a model should take the form of a rigorously defined formal system which does not require any human intuition or expertise to operate, and that it should be capable of both analysis (F₀ to phonology) and synthesis (phonology to F₀). Existing phonetic models are reviewed, and it is shown that none meets the specification for the type of formal model required. A new phonetic model is presented that has three levels of description: the F₀ level, the intermediate level and the phonological level. The intermediate level uses three basic elements, rise, fall and connection, to model F₀ contours. A mathematical equation is specified for each of these elements so that a continuous F₀ contour can be created from a sequence of elements. The phonological system uses H and L to describe high and low pitch accents, C to describe connection elements and B to describe the rises that occur at phrase boundaries. A fully specified grammar is described which links the intermediate and F₀ levels. A grammar is also specified for linking the phonological and intermediate levels, but this is only partly complete owing to problems with the phonological level of description. A computer implementation of the model is described; most of the implementation work concentrated on the relationship between the intermediate level and the F₀ level. Results show that the computer analysis system labels F₀ contours quite accurately, though significantly worse than a human labeller, and that the synthesis system produces artificial F₀ contours that are very similar to naturally occurring ones. The thesis concludes with indications of further work and ideas on how the computer implementation of the model could be of practical benefit in speech synthesis and recognition.
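The synthesis direction, rendering a continuous F₀ contour from a sequence of intermediate-level elements, can be sketched as below. Note the hedge: each element here is rendered with plain linear interpolation, whereas the thesis specifies its own equation per element type (rise, fall, connection); the sample counts and frequency values are illustrative.

```python
def render_element(f0_start, f0_end, n_samples):
    """F0 samples across one element, linearly interpolated between its
    start and end targets (the thesis uses a per-type equation instead).
    Assumes n_samples >= 2."""
    step = (f0_end - f0_start) / (n_samples - 1)
    return [f0_start + i * step for i in range(n_samples)]

def render_contour(elements, n_samples=5):
    """Concatenate rendered elements into one continuous contour.
    Each element is a (start_hz, end_hz) pair; continuity holds when
    consecutive elements share their boundary value."""
    contour = []
    for f0_start, f0_end in elements:
        contour.extend(render_element(f0_start, f0_end, n_samples))
    return contour
```

For instance, a rise from 100 Hz to 140 Hz followed by a fall back to 90 Hz is just `render_contour([(100, 140), (140, 90)])`.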

    Multiple acoustic cues for Korean stops and automatic speech recognition

    The purpose of this thesis is to analyse acoustic characteristics of Korean stops by way of multivariate statistical tests, and to apply the results of the analysis in Automatic Speech Recognition (ASR) of Korean. Three acoustic cues that differentiate the three types of Korean oral stops are closure duration, Voice Onset Time (VOT), and the fundamental frequency (F0) of the vowel after a stop. We review the characteristics of these parameters previously reported in various phonetic studies and test their usefulness for differentiating the three types of stops on two databases: one with controlled contexts, as in other phonetic studies, and the other a continuous speech database designed for ASR. Statistical tests on both databases confirm that the three types of stops can be differentiated by the three acoustic parameters. In order to exploit these parameters for ASR, a context-dependent Hidden Markov Model (HMM) based baseline system with a short-pause model is built, which greatly improves performance compared to other systems. For modelling of the three acoustic parameters, an automatic segmentation technique for closure and VOT is developed. Samples of each acoustic parameter are modelled with univariate and multivariate probability distribution functions, and stop probability from these models is integrated by a post-processing technique. Our results show that integrating stop probability yields little improvement over the baseline system. However, they suggest that stop probabilities will be useful in determining the correct hypothesis with a larger lexicon containing more minimal pairs of words that differ by the identity of just one stop.
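The univariate-distribution modelling of the three cues can be sketched as a naive Bayes-style scorer: one Gaussian per cue per stop type, combined under an independence assumption. The means and standard deviations below are invented placeholders, not measured values from the thesis, and the class names follow the standard lenis/fortis/aspirated terminology.

```python
import math

def log_gauss(x, mean, std):
    """Log density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * std * std) - (x - mean) ** 2 / (2 * std * std)

# Cue order: (closure duration ms, VOT ms, post-stop F0 Hz).
# Parameters are illustrative placeholders only.
MODELS = {
    "lenis":     [(60, 15), (40, 12), (110, 20)],
    "fortis":    [(110, 20), (12, 6), (150, 25)],
    "aspirated": [(80, 18), (80, 20), (160, 25)],
}

def classify_stop(cues):
    """Pick the stop type whose per-cue Gaussians give the highest
    summed log score (independence assumption across cues)."""
    def score(params):
        return sum(log_gauss(x, m, s) for x, (m, s) in zip(cues, params))
    return max(MODELS, key=lambda k: score(MODELS[k]))
```

In the thesis these per-model stop probabilities are not used standalone but integrated with HMM hypotheses in a post-processing step.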

    Facial expression recognition in the wild : from individual to group

    The progress in computing technology has increased the demand for smart systems capable of understanding human affect and emotional manifestations. One of the crucial factors in designing such systems is accurate automatic Facial Expression Recognition (FER). In computer vision, automatic facial expression analysis has been an active field of research for over two decades, yet many questions remain unanswered. The research presented in this thesis addresses some of the key issues of FER in challenging conditions: 1) creating a facial expressions database representing real-world conditions; 2) devising Head Pose Normalisation (HPN) methods that are independent of facial-part locations; 3) creating automatic methods for analysing the mood of a group of people. The central hypothesis of the thesis is that extracting close-to-real-world data from movies, and performing facial expression analysis on it, is a stepping stone towards analysing faces in real-world, unconstrained conditions. A temporal facial expressions database, Acted Facial Expressions in the Wild (AFEW), is proposed. The database is constructed and labelled using a semi-automatic process based on closed-caption subtitle keyword search. AFEW is currently the largest facial expressions database representing challenging conditions available to the research community. To provide a common platform on which researchers can evaluate and extend their state-of-the-art FER methods, the first Emotion Recognition in the Wild (EmotiW) challenge, based on AFEW, is proposed, along with an image-only facial expressions database, Static Facial Expressions In The Wild (SFEW), extracted from AFEW. The thesis then focuses on HPN for real-world images. Earlier methods were based on fiducial points; however, as fiducial-point detection is itself an open problem for real-world images, such HPN can be error-prone. A HPN method based on response maps generated from part-detectors is proposed. The proposed shape-constrained method requires neither fiducial points nor head-pose information, which makes it suitable for real-world images. Data from movies and the internet, representing real-world conditions, poses another major challenge: the presence of multiple subjects. This defines another focus of the thesis, in which a novel approach for modelling the perceived mood of a group of people in an image is presented. A new database is constructed from Flickr based on keywords related to social events. Three models are proposed: an averaging-based Group Expression Model (GEM), a Weighted Group Expression Model (GEM_w), and an Augmented Group Expression Model (GEM_LDA). GEM_w is based on social contextual attributes, which are used as weights on each person's contribution towards the overall group mood, while GEM_LDA is based on a topic model and feature augmentation. The proposed framework is applied to group candid-shot selection and event summarisation. The Structural SIMilarity (SSIM) index is also explored for finding similar facial expressions, and the framework is applied to creating image albums based on facial expressions and to finding corresponding expressions for training facial performance transfer algorithms.
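The averaging and weighted group models lend themselves to a compact sketch. This is a simplification under stated assumptions: per-face expression intensities are taken as given scalars in [0, 1], and the contextual weights (in GEM_w, derived from social attributes such as a person's position and size in the group) are passed in directly as numbers.

```python
def gem_average(face_scores):
    """GEM: the group mood as the plain mean of per-person
    expression intensities."""
    return sum(face_scores) / len(face_scores)

def gem_weighted(face_scores, weights):
    """GEM_w: contextual weights scale each person's contribution
    towards the overall group mood."""
    total = sum(weights)
    return sum(s * w for s, w in zip(face_scores, weights)) / total
```

With equal weights, `gem_weighted` reduces to `gem_average`; GEM_LDA, which augments this with a topic model, is beyond a few-line sketch.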

    Automatic prosodic analysis for computer aided pronunciation teaching

    Correct pronunciation of spoken language requires the appropriate modulation of acoustic characteristics of speech to convey linguistic information at a suprasegmental level. Such prosodic modulation is a key aspect of spoken language and an important component of foreign-language learning, for purposes of both comprehension and intelligibility. Computer-aided pronunciation teaching involves automatic analysis of the speech of a non-native talker in order to diagnose the learner's performance in comparison with the speech of a native talker. This thesis describes research undertaken to automatically analyse the prosodic aspects of speech for computer-aided pronunciation teaching. It is necessary to describe the suprasegmental composition of a learner's speech in order to characterise significant deviations from native-like prosody, and to offer some kind of corrective diagnosis. Phonological theories of prosody aim to describe the suprasegmental composition of speech.
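One ingredient of such a diagnosis, comparing a learner's F₀ contour against a native reference, can be sketched as below. The normalisation and distance measure here are generic illustrations, not the thesis's actual method: speaker-specific pitch level and range are removed by z-scoring, and the contours are assumed to be equal-length (a real system would time-align them first).

```python
def normalise(contour):
    """Z-score normalisation removes a speaker's overall pitch
    level and range, leaving the contour shape."""
    mean = sum(contour) / len(contour)
    var = sum((x - mean) ** 2 for x in contour) / len(contour)
    std = var ** 0.5 or 1.0  # guard against a perfectly flat contour
    return [(x - mean) / std for x in contour]

def prosodic_distance(learner, native):
    """RMS distance between equal-length normalised contours;
    larger values flag a bigger deviation from the native shape."""
    a, b = normalise(learner), normalise(native)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
```

Because of the normalisation, a learner whose contour has the same shape at a different register scores a distance of zero; only shape deviations are penalised.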

    Quantification of advanced dementia patients’ engagement in therapeutic sessions: an automatic video based approach using computer vision and machine learning

    Most individuals with advanced dementia lose the ability to communicate with the outside world through speech. This limits their ability to participate in social activities crucial to their well-being and quality of life. However, there is mounting evidence that individuals with advanced dementia can still communicate non-verbally and benefit greatly from these interactions. A major problem in facilitating the advancement of this research is of a practical and methodical nature: assessing the success of treatment is currently done by humans, is prone to subjective bias and inconsistency, and involves laborious, time-consuming effort. The present work is the first attempt to explore whether automatic (artificial-intelligence-based) quantification of the degree of patient engagement is feasible in Adaptive Interaction sessions, a highly promising intervention developed to improve the quality of life of non-verbal individuals with advanced dementia. We describe a framework which uses computer vision and machine learning as a potential first step towards answering this question. Using a real-world data set of videos of therapeutic sessions, not acquired specifically for the purposes of the present work, we demonstrate highly promising results.