
    GenLeNa: Sistema para la construcción de Aplicaciones de Generación de Lenguaje Natural (GenLeNa: A System for Building Natural Language Generation Applications)

    In this article, we propose dividing the process of constructing natural language generation (NLG) systems into two stages: content planning (CP), which depends on the domain of the application being developed, and document structuring (DS). This division allows people who are not NLG experts to develop natural language generation systems by concentrating on building abstract representations of the information to be communicated (called messages). A specific architecture for the DS stage is also presented, which enables NLG researchers to work orthogonally on specific techniques and methodologies for converting messages into grammatically and syntactically correct text.
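    The CP/DS split above lends itself to a simple two-stage pipeline. The following Python sketch illustrates the division of labor; the names Message, ContentPlanner, and DocumentStructurer are illustrative stand-ins, not GenLeNa's actual API.

    ```python
    # Hypothetical two-stage NLG pipeline in the spirit of the CP/DS split:
    # a domain-dependent planner produces abstract "messages", and a
    # domain-independent structurer turns them into text.
    from dataclasses import dataclass

    @dataclass
    class Message:
        """Abstract representation of information to communicate."""
        predicate: str
        arguments: dict

    class ContentPlanner:
        """Domain-dependent stage: decides WHAT to say."""
        def plan(self, data):
            return [Message("temperature_report",
                            {"city": data["city"], "value": data["temp_c"]})]

    class DocumentStructurer:
        """Domain-independent stage: decides HOW to say it."""
        def realize(self, messages):
            sentences = []
            for m in messages:
                if m.predicate == "temperature_report":
                    sentences.append(
                        f"The temperature in {m.arguments['city']} is "
                        f"{m.arguments['value']} degrees Celsius.")
            return " ".join(sentences)

    # A domain expert writes only the planner; NLG specialists refine the
    # structurer independently ("orthogonally").
    planner, structurer = ContentPlanner(), DocumentStructurer()
    print(structurer.realize(planner.plan({"city": "Bogota", "temp_c": 22})))
    ```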

    A framework for multi-modal input in a pervasive computing environment

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaves 51-53). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

    In this thesis, we propose a framework that uses multi-domain and multi-modal techniques to disambiguate a variety of natural human input modes. The system is based on the input needs of pervasive computing users. The work extends the Galaxy architecture developed by the Spoken Language Systems group at MIT. Just as speech recognition disambiguates an input waveform by using a grammar to find the best matching phrase, we use the same mechanism to disambiguate other input forms, T9 in particular. A skeleton version of the framework was implemented to show that the framework is feasible and to explore some of the issues that might arise. The system currently works for both T9 and speech modes. The framework can also accommodate any other type of input for which a recognizer can be built, such as Graffiti input.

    by Shalini Agarwal. M.Eng.
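    To make the grammar-based disambiguation concrete, here is a toy T9 decoder: a digit sequence is ambiguous across several words, and a weighted vocabulary (standing in for the recognizer's grammar) selects the most likely match. This is a minimal sketch of the idea only, not code from the thesis or the Galaxy architecture.

    ```python
    # Toy T9 disambiguation: map key presses back to candidate words and
    # let unigram weights (a stand-in for a real grammar/language model)
    # break the ambiguity. Illustrative sketch, not the thesis's code.
    T9_KEYS = {
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }
    DIGIT_OF = {ch: d for d, chars in T9_KEYS.items() for ch in chars}

    def word_to_digits(word):
        """Return the T9 key sequence that could produce this word."""
        return "".join(DIGIT_OF[ch] for ch in word.lower())

    def disambiguate(digits, vocabulary):
        """Return the highest-weighted vocabulary word matching the keys."""
        candidates = [(p, w) for w, p in vocabulary.items()
                      if word_to_digits(w) == digits]
        return max(candidates)[1] if candidates else None

    # "4663" could be "good", "home", or "gone"; the weights decide.
    vocab = {"good": 0.5, "home": 0.3, "gone": 0.2}
    print(disambiguate("4663", vocab))  # -> "good"
    ```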

    Towards a unified framework for sub-lexical and supra-lexical linguistic modeling

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 171-178).

    Conversational interfaces have received much attention as a promising natural communication channel between humans and computers. A typical conversational interface consists of three major systems: speech understanding, dialog management, and spoken language generation. In such a conversational interface, speech recognition, as the front end of speech understanding, remains one of the fundamental challenges in establishing robust and effective human/computer communication. On the one hand, the speech recognition component in a conversational interface lives in a rich system environment: diverse sources of knowledge are available and can potentially benefit its robustness and accuracy. For example, the natural language understanding component can provide linguistic knowledge in syntax and semantics that helps constrain the recognition search space. On the other hand, the speech recognition component also faces the challenge of spontaneous speech, and it is important to address the casualness of speech using the knowledge sources available. For example, sub-lexical linguistic information would be very useful in providing linguistic support for previously unseen words, and dynamic reliability modeling may help improve recognition robustness for poorly articulated speech.

    In this thesis, we mainly focused on the integration of knowledge sources within the speech understanding system of a conversational interface. More specifically, we studied the formalization and integration of hierarchical linguistic knowledge at both the sub-lexical level and the supra-lexical level, and proposed a unified framework for integrating hierarchical linguistic knowledge in speech recognition using layered finite-state transducers (FSTs). Within the proposed framework, we developed context-dependent hierarchical linguistic models at both the sub-lexical and supra-lexical levels. FSTs were designed and constructed to encode both the structure and the probability constraints provided by the hierarchical linguistic models. We also studied empirically the feasibility and effectiveness of integrating hierarchical linguistic knowledge into speech recognition using the proposed framework.

    We found that, at the sub-lexical level, hierarchical linguistic modeling is effective in providing generic sub-word structure and probability constraints. Since such constraints are not restricted to a fixed system vocabulary, they can help the recognizer correctly identify previously unseen words. Together with the unknown-word support from natural language understanding, a conversational interface would be able to deal with unknown words better, and could possibly incorporate them into the active recognition vocabulary on the fly. At the supra-lexical level, experimental results showed that the shallow parsing model built within the proposed layered FST framework, with top-level n-gram probabilities and phrase-level context-dependent probabilities, was able to reduce recognition errors compared to a class n-gram model of the same order. However, we also found that its application can be limited by the complexity of the composed FSTs. This suggests that, with a much more complex grammar at the supra-lexical level, a proper tradeoff between tight knowledge integration and system complexity becomes more important ...

    by Xiaolong Mou. Ph.D.
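    As a rough illustration of how layered knowledge sources combine under FST composition, the hand-rolled sketch below intersects two toy weighted acceptors (a word layer and a phrase layer): a hypothesis survives only if both layers accept it, and their costs add as in the tropical semiring. The layers, symbols, and costs are invented for illustration; the thesis's actual hierarchical models and layered FST construction are far richer.

    ```python
    # Toy composition (intersection) of two weighted acceptors. Each acceptor
    # is (transitions, finals): transitions maps state -> {symbol: (next, cost)}
    # and the start state is 0. Costs add, as in the tropical semiring.
    def compose(a, b):
        (ta, fa), (tb, fb) = a, b
        trans, stack, seen = {}, [(0, 0)], {(0, 0)}
        while stack:
            sa, sb = state = stack.pop()
            arcs = {}
            for sym, (na, ca) in ta.get(sa, {}).items():
                if sym in tb.get(sb, {}):
                    nb, cb = tb[sb][sym]
                    arcs[sym] = ((na, nb), ca + cb)
                    if (na, nb) not in seen:
                        seen.add((na, nb))
                        stack.append((na, nb))
            trans[state] = arcs
        finals = {s for s in seen if s[0] in fa and s[1] in fb}
        return trans, finals

    def cost(composed, words):
        """Path cost of a word sequence, or None if either layer rejects it."""
        trans, finals = composed
        state, total = (0, 0), 0.0
        for w in words:
            if w not in trans.get(state, {}):
                return None
            state, c = trans[state][w]
            total += c
        return total if state in finals else None

    # Word layer: n-gram-like costs. Phrase layer: structural constraints.
    word_layer = ({0: {"show": (1, 0.2)},
                   1: {"flights": (2, 0.7), "fares": (2, 1.2)}}, {2})
    phrase_layer = ({0: {"show": (1, 0.0)},
                     1: {"flights": (2, 0.0), "fares": (2, 0.0)}}, {2})
    composed = compose(word_layer, phrase_layer)
    print(cost(composed, ["show", "flights"]))  # 0.9: allowed by both layers
    print(cost(composed, ["show", "hotels"]))   # None: rejected
    ```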

    Corpus-based unit selection for natural-sounding speech synthesis

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 179-196). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

    Speech synthesis is an automatic encoding process carried out by machine through which symbols conveying linguistic information are converted into an acoustic waveform. In the past decade or so, a trend toward a non-parametric, corpus-based approach has focused on using real human speech as source material for producing novel, natural-sounding speech. This work proposes a communication-theoretic formulation in which unit selection is a noisy channel through which an input sequence of symbols passes and from which an output sequence, possibly corrupted due to the coverage limits of the corpus, emerges. The penalty of approximation is quantified by substitution and concatenation costs, which grade which unit contexts are interchangeable and where concatenations are not perceivable. These costs are semi-automatically derived from data and are found to agree with acoustic-phonetic knowledge.

    The implementation is based on a finite-state transducer (FST) representation that has been successfully used in speech and language processing applications, including speech recognition. A proposed constraint kernel topology connects all units in the corpus with associated substitution and concatenation costs, and enables an efficient Viterbi search that operates with low latency and scales to large corpora. An A* search can be applied in a second, rescoring pass to incorporate finer acoustic modelling. Extensions to this FST-based search include hierarchical and paralinguistic modelling. The search can also be used in an iterative feedback loop to record new utterances that enhance corpus coverage.

    This speech synthesis framework has been deployed across various domains and languages in many voices, a testament to its flexibility and rapid prototyping capability. Experimental subjects completing tasks in a given air travel planning scenario by interacting in real time with a spoken dialogue system over the telephone found the system "easiest to understand" out of eight competing systems. In more detailed listening evaluations, subjective opinions garnered from human participants were found to be correlated with objective measures calculable by machine.

    by Jon Rong-Wei Yi. Ph.D.
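    A minimal sketch of the unit-selection search, under invented substitution and concatenation costs: dynamic programming (a small Viterbi pass) picks one corpus unit per target position so that the summed costs are minimal. The cost functions, unit naming, and data below are hypothetical; the thesis's semi-automatically derived costs, constraint kernel topology, and FST-based search are considerably more sophisticated.

    ```python
    # Viterbi-style unit selection over per-position candidate lists.
    # Illustrative only: real systems use richer costs and huge corpora.
    def unit_select(targets, candidates, sub_cost, concat_cost):
        """candidates[i] lists corpus units for targets[i];
        returns (total_cost, best_unit_sequence)."""
        best = [(sub_cost(targets[0], u), [u]) for u in candidates[0]]
        for i in range(1, len(targets)):
            new_best = []
            for u in candidates[i]:
                c, path = min(((pc + concat_cost(p[-1], u), p)
                               for pc, p in best), key=lambda t: t[0])
                new_best.append((c + sub_cost(targets[i], u), path + [u]))
            best = new_best
        return min(best, key=lambda t: t[0])

    # Hypothetical corpus: "phone#index", where consecutive indices mean the
    # units were adjacent in the corpus and so join seamlessly.
    corpus = {"k": ["k#7", "k#42"], "ae": ["ae#3", "ae#8"], "t": ["t#9"]}
    def sub_cost(target, unit):
        return 0.0 if unit.split("#")[0] == target else 5.0
    def concat_cost(left, right):
        li, ri = int(left.split("#")[1]), int(right.split("#")[1])
        return 0.0 if ri == li + 1 else 1.0

    targets = ["k", "ae", "t"]
    print(unit_select(targets, [corpus[t] for t in targets],
                      sub_cost, concat_cost))
    # -> (0.0, ['k#7', 'ae#8', 't#9']): the contiguous corpus run wins
    ```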