3 research outputs found

    Web-based environment for user generation of spoken dialog for virtual assistants

    In this paper, we develop a web-based spoken dialog generation environment that lets users edit dialogs with a video virtual assistant and select the assistant's 3D motions and tone of voice. In the proposed system, “anyone” can “easily” post and edit dialog content for the dialog system. The dialog type supported by the system is limited to question-and-answer dialogs in order to avoid conflicts when multiple users edit the same content. The spoken dialog sharing service and FST generator produce spoken dialog content for the MMDAgent spoken dialog system toolkit, which includes a speech recognizer, a dialog control unit, a speech synthesizer, and a virtual agent. Dialog content is created from question-and-answer dialogs posted by users together with FST templates. The proposed system was operated for more than a year in a student lounge at the Nagoya Institute of Technology, where users added more than 500 dialogs during the experiment and attached images to 65% of the postings. The most frequently posted category was “animation, video games, manga.” The system was also subjected to open examination by tourist information staff who had no prior experience with spoken dialog systems. Based on their impressions of how tourists would use the dialog system, they shortened some of the system's responses and added pauses to the longer ones to make them easier to understand.
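    As a rough illustration of how user-posted question-and-answer pairs might be expanded into FST-based dialog content for MMDAgent, here is a minimal Python sketch. The four-field transition layout and the event, command, motion, and voice names are simplifying assumptions for illustration only, not the exact format used by the toolkit or by the proposed FST generator.

```python
# Minimal sketch: turning user-posted question/answer pairs into
# MMDAgent-style FST transitions. The four-field line layout
# (current state, next state, input event, output command) and the
# event/command/motion names are simplifying assumptions, not the
# toolkit's exact format.

from dataclasses import dataclass
from typing import List


@dataclass
class QAPost:
    question: str                  # recognized phrase that triggers the answer
    answer: str                    # text the virtual assistant speaks back
    motion: str = "mei_greeting"   # illustrative 3D motion label
    voice: str = "normal"          # illustrative tone-of-voice label


def qa_to_fst_lines(posts: List[QAPost], start_state: int = 1) -> List[str]:
    """Expand each Q&A post into a recognition -> synthesis transition pair."""
    lines = []
    state = start_state
    for post in posts:
        recog = f"RECOG_EVENT_STOP|{post.question}"
        synth = f"SYNTH_START|mei|{post.voice}|{post.answer}"
        motion = f"MOTION_ADD|mei|action|{post.motion}.vmd|PART|ONCE"
        # One transition speaks the answer, the next plays the motion,
        # then control returns to the waiting state.
        lines.append(f"{state} {state + 1} {recog} {synth}")
        lines.append(f"{state + 1} {start_state} <eps> {motion}")
        state += 2
    return lines


if __name__ == "__main__":
    posts = [QAPost("where is the student lounge", "It is on the first floor.")]
    print("\n".join(qa_to_fst_lines(posts)))
```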

    Spoken Language Understanding strategies on the France Telecom 3000 voice agency corpus

    Telephone services are now deployed that allow users to respond to prompts in spoken natural language. These systems have limited domain semantics, and their dialogue strategies are represented by finite-state diagrams. Most of them adopt a sequential approach in which Automatic Speech Recognition (ASR), Spoken Language Understanding (SLU), and Dialogue Management (DM) are separate processes. In the framework of the France Telecom 3000 voice service, this paper studies several strategies for integrating these processes more closely.
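    The abstract does not state which integration strategies are studied; purely as an illustration of the contrast it draws, the following sketch compares a strictly sequential pipeline, which commits to the 1-best ASR hypothesis before understanding, with a variant that rescores an ASR n-best list using SLU confidences. The toy interpreter, intents, and scores are assumptions, not the France Telecom 3000 system.

```python
# Rough illustration of the sequential-vs-integrated contrast: a 1-best
# pipeline commits to the top ASR hypothesis before SLU, while an integrated
# variant lets SLU confidences rescore the ASR n-best list. The toy
# interpreter and all scores are assumptions for illustration only.

from typing import List, Tuple

# (hypothesis text, ASR confidence) pairs, as an n-best list might provide.
NBest = List[Tuple[str, float]]


def slu_interpret(text: str) -> Tuple[str, float]:
    """Toy SLU: map a transcript to (intent, interpretation confidence)."""
    if "bill" in text:
        return "CHECK_BILL", 0.9
    if "agent" in text:
        return "TRANSFER_AGENT", 0.8
    return "UNKNOWN", 0.1


def sequential(nbest: NBest) -> str:
    """Commit to the 1-best transcript, then interpret it."""
    best_text, _ = max(nbest, key=lambda h: h[1])
    intent, _ = slu_interpret(best_text)
    return intent


def integrated(nbest: NBest) -> str:
    """Combine ASR and SLU confidences before committing to an intent."""
    scored = [(slu_interpret(text)[0], asr_conf * slu_interpret(text)[1])
              for text, asr_conf in nbest]
    return max(scored, key=lambda s: s[1])[0]


if __name__ == "__main__":
    nbest = [("I want to speak to an age", 0.55),    # misrecognized 1-best
             ("I want to speak to an agent", 0.45)]
    print(sequential(nbest))   # misses the intent from the misrecognition
    print(integrated(nbest))   # TRANSFER_AGENT survives via joint scoring
```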

    Classification automatique pour la compréhension de la parole (vers des systèmes semi-supervisés et auto-évolutifs) [Automatic classification for spoken language understanding (towards semi-supervised, self-evolving systems)]

    Spoken language understanding lies at the intersection of two broad research fields: automatic speech recognition and machine learning. One of the main problems in this domain is obtaining a corpus large enough to train an effective statistical model. Speech corpora for training understanding models require substantial human effort, in particular for transcription and semantic annotation; their production cost is therefore high and they are available only in limited quantities. This thesis mainly aims at reducing the need for human intervention in two ways: first, by reducing the amount of annotated corpus needed to build a model, using semi-supervised learning techniques (Self-Training, Co-Training and Active-Learning); and second, by exploiting the answers of the system's end users to improve the understanding model. This last point addresses a second problem faced by spoken language understanding systems and treated in this thesis: the need to regularly adapt their models to changes in user behaviour or to modifications of the services offered by the system.
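    As a concrete illustration of the Self-Training technique listed above, here is a minimal sketch for an utterance-classification task: a model trained on a small labelled set pseudo-labels the unlabelled utterances it is confident about and is then retrained on the enlarged set. The data, features, and confidence threshold are illustrative assumptions, not the corpus or models used in the thesis.

```python
# Minimal self-training sketch for an utterance-classification SLU task:
# train on a small labelled set, pseudo-label the unlabelled utterances the
# model is confident about, retrain, and repeat. The data, features, and
# confidence threshold are illustrative assumptions, not the thesis setup.

import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled = [("what is my account balance", "BALANCE"),
            ("balance on my account please", "BALANCE"),
            ("read me my last bill", "BILL"),
            ("how much is my bill this month", "BILL")]
unlabelled = ["tell me my balance", "send me the bill", "my bill please"]

texts = [t for t, _ in labelled]
labels = [y for _, y in labelled]

vec = TfidfVectorizer()
X_all = vec.fit_transform(texts + unlabelled)
X_lab, X_unlab = X_all[:len(texts)], X_all[len(texts):]

clf = LogisticRegression(max_iter=1000)
for _ in range(3):                       # a few self-training rounds
    clf.fit(X_lab, labels)
    if X_unlab.shape[0] == 0:
        break
    probs = clf.predict_proba(X_unlab)
    confident = np.flatnonzero(np.max(probs, axis=1) >= 0.55)  # assumed threshold
    if confident.size == 0:
        break
    # Promote confident pseudo-labelled utterances into the training set.
    pseudo = [clf.classes_[i] for i in np.argmax(probs[confident], axis=1)]
    X_lab = vstack([X_lab, X_unlab[confident]])
    labels = labels + pseudo
    keep = np.setdiff1d(np.arange(X_unlab.shape[0]), confident)
    X_unlab = X_unlab[keep]

print(clf.predict(vec.transform(["how much do I owe on my bill"])))
```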