    Defenestration: deconstructing the frame-in relation in Ungarinyin

    The Australian Aboriginal language Ungarinyin (Worrorran) has a single complex-clause construction for expressing reported speech (‘say’), which can also signal reported thought (‘think’) and attribute intentions (‘want’). By demonstrating which formal and functional distinctions are essential to the interpretation of this Ungarinyin construction, the present paper aims to contribute to understanding the exact nature of the syntactic relation involved in reported speech constructions. Following the account of McGregor (1994; 1997; 2008), I analyse the clausal syntax of reported speech constructions as a dedicated syntactic relation, separate from more familiar clausal relations such as coordination and subordination; I call this the ‘frame-in’ relation. Subsequently, I compare the conventionalised reported speech construction in Ungarinyin to a variety of more loosely integrated non-conventionalised or semi-conventionalised strategies for expressing speech and thought attribution in the language. Collectively, I refer to these strategies as examples of ‘defenestration’: constructions that lack the typical marking of the syntactic frame-in relation while expressing the meaning associated with a regular frame-in construction. Instances of defenestration differ from syntactic frame-in in that they express the meaning of a frame-in construction through transparent compositional means. I argue that types of defenestration show remarkable regularities in Ungarinyin and, tentatively, cross-linguistically, which has consequences for the analysis of indexicality and iconicity in syntax and presents a new context for analysing the syntax of reported speech constructions in relation to multimodal features, particularly for the category of free (in)direct speech and ‘zero quotatives’.

    Learning Fault-tolerant Speech Parsing with SCREEN

    This paper describes a new approach and a system, SCREEN, for fault-tolerant speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for Natural language. Speech parsing describes the syntactic and semantic analysis of spontaneous spoken language. The general approach is based on incremental immediate flat analysis, learning of syntactic and semantic speech parsing, parallel integration of current hypotheses, and the consideration of various forms of speech-related errors. The goal of this approach is to explore the parallel interactions between various knowledge sources for learning incremental fault-tolerant speech parsing. This approach is examined in the SCREEN system using various hybrid connectionist techniques. Hybrid connectionist techniques are examined because of their promising properties of inherent fault tolerance, learning, gradedness and parallel constraint integration. The input to SCREEN is hypotheses about recognized words of a spoken utterance, potentially analyzed by a speech system; the output is hypotheses about the flat syntactic and semantic analysis of the utterance. In this paper we focus on the general approach, the overall architecture, and examples of learning flat syntactic speech parsing. Unlike most other speech-language architectures, SCREEN emphasizes an interactive rather than an autonomous position, learning rather than encoding, flat analysis rather than in-depth analysis, and fault-tolerant processing of phonetic, syntactic and semantic knowledge.
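    The abstract describes SCREEN's incremental flat analysis only at the architectural level. Purely as an illustration of the idea (not the paper's network: the vocabulary, the flat category set, and the untrained random weights are all invented for this sketch), an Elman-style recurrent tagger that emits one graded category hypothesis per word, as each word arrives, might look like this:

    ```python
    # Minimal sketch of incremental flat syntactic tagging with an Elman-style
    # recurrent network. Illustrative only: the weights are random (untrained),
    # and the vocabulary and flat category set are invented for the example.
    import numpy as np

    VOCAB = ["i", "want", "a", "burger", "please"]
    CATEGORIES = ["NOUN-GROUP", "VERB-GROUP", "MISC"]   # toy flat categories

    rng = np.random.default_rng(0)
    HIDDEN = 8
    W_in = rng.normal(size=(HIDDEN, len(VOCAB)))        # input -> hidden
    W_rec = rng.normal(size=(HIDDEN, HIDDEN))           # context -> hidden
    W_out = rng.normal(size=(len(CATEGORIES), HIDDEN))  # hidden -> scores

    def one_hot(word):
        v = np.zeros(len(VOCAB))
        v[VOCAB.index(word)] = 1.0
        return v

    def tag_incrementally(utterance):
        """Emit one graded flat-category hypothesis per word, as it arrives."""
        h = np.zeros(HIDDEN)          # recurrent state carries the left context
        for word in utterance:
            h = np.tanh(W_in @ one_hot(word) + W_rec @ h)
            scores = W_out @ h
            probs = np.exp(scores) / np.exp(scores).sum()   # graded hypotheses
            yield word, CATEGORIES[int(np.argmax(probs))], float(probs.max())

    for word, cat, p in tag_incrementally(["i", "want", "a", "burger"]):
        print(f"{word:8s} -> {cat:12s} (confidence {p:.2f})")
    ```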

    SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks

    In this paper, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) the robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system, which is based on this new robust, learned and flat analysis. In this paper, we focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation of robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault tolerance and learning capability of connectionist networks can support a flat analysis and provide more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.
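    The evaluation described here degrades the input with noisy or incomplete word hypotheses. A minimal, self-contained sketch of the corruption side of such an experiment (the error operations and rates below are my assumptions, not the paper's protocol) could be:

    ```python
    # Self-contained sketch of an input-corruption harness for robustness tests:
    # simulate ASR-style errors (substitutions, deletions, insertions) on a word
    # sequence. The operations and rates are illustrative assumptions only.
    import random

    def corrupt(words, vocab, p_sub=0.10, p_del=0.05, p_ins=0.05, seed=0):
        """Return a noisy copy of `words`, mimicking recognizer errors."""
        rng = random.Random(seed)
        noisy = []
        for w in words:
            r = rng.random()
            if r < p_del:
                continue                        # deletion: the word is lost
            if r < p_del + p_sub:
                noisy.append(rng.choice(vocab)) # substitution: misrecognition
            else:
                noisy.append(w)
            if rng.random() < p_ins:
                noisy.append(rng.choice(vocab)) # insertion: spurious word
        return noisy

    utterance = "i would like a small coke please".split()
    vocab = ["i", "would", "like", "a", "small", "coke", "please", "uh", "the"]
    for trial in range(3):
        print(corrupt(utterance, vocab, seed=trial))
    ```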

    New Technique to Enhance the Performance of Spoken Dialogue Systems by Means of Implicit Recovery of ASR Errors

    This paper proposes a new technique to implicitly correct some of the ASR errors made by spoken dialogue systems, implemented at two levels: statistical and linguistic. The goal of the former level is to employ, for the correction, knowledge extracted from the analysis of a training corpus comprised of utterances and their corresponding ASR results. The outcome of the analysis is a set of syntactic-semantic models and a set of lexical models, which are optimally selected during the correction. The goal of the correction at the linguistic level is to repair errors not detected at the statistical level which affect the semantics of the sentences. Experiments carried out with a previously developed spoken dialogue system for the fast-food domain indicate that the technique improves word accuracy, spoken language understanding and task completion by 8.5%, 16.54% and 44.17% absolute, respectively.
    Ministerio de Ciencia y TecnologĂ­a TIN2007-64718 HAD
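    The abstract does not specify the statistical or lexical models. Purely as an illustration of the general idea of lexical-model correction (the lexicon, counts, and the 0.7/0.3 weighting below are invented, and string similarity stands in crudely for acoustic confusability), one correction step might be sketched as:

    ```python
    # Illustrative sketch of a lexical-model correction step: an unreliable ASR
    # token is replaced by the in-domain word that best balances string
    # similarity (a crude stand-in for acoustic confusability) and training-
    # corpus frequency. The lexicon, counts, and 0.7/0.3 weights are invented.
    from difflib import SequenceMatcher

    LEXICON_COUNTS = {"burger": 120, "burrito": 45, "water": 80, "larger": 5}
    TOTAL = sum(LEXICON_COUNTS.values())

    def correct(asr_token):
        """Pick the lexicon word maximizing a similarity/frequency mix."""
        def score(word):
            similarity = SequenceMatcher(None, asr_token, word).ratio()
            frequency = LEXICON_COUNTS[word] / TOTAL
            return 0.7 * similarity + 0.3 * frequency
        return max(LEXICON_COUNTS, key=score)

    print(correct("berger"))  # -> 'burger' (similar string, frequent word)
    print(correct("lager"))   # -> 'larger' (similarity dominates frequency)
    ```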

    Saisir l’insaisissable? Les mesures de longueur d’énoncĂ©s en linguistique appliquĂ©e [Grasping the ungraspable? Measures of utterance length in applied linguistics]

    The utterance is a widely used linguistic unit. It seems, however, to escape every attempt to define it unequivocally. There appears to be a consensus in applied linguistics that a fuzzy combination of syntactic, semantic and prosodic cues is necessary to identify the boundaries of an utterance. Drawing upon our own research on advanced French interlanguage, we present an analysis of the measure “Mean Length of Utterance” (MLU), widely used in studies of first language acquisition and speech disorders and, to a lesser extent, of second language acquisition. MLU is shown to be methodologically unreliable for adult speech. We argue that other measures of utterance length, like the MLU3, are sounder and can help to gain a better understanding of synchronic variation in speech.
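    As a concrete reminder of what the basic measure computes, here is a minimal sketch of MLU over a transcript, counting words rather than morphemes (the choice of unit, and of utterance segmentation itself, is exactly what the paper problematizes; the sample utterances are invented, and the paper's alternative measures such as MLU3 are not reproduced):

    ```python
    # Minimal sketch of Mean Length of Utterance (MLU), counted in words.
    # Whether to count words or morphemes, and how to segment utterances in the
    # first place, is exactly the methodological problem the paper discusses.
    utterances = [
        "je voudrais un cafĂ©",
        "oui",
        "et aussi un croissant s'il vous plaĂźt",
    ]

    def mlu(utterances):
        """Mean number of words per utterance."""
        lengths = [len(u.split()) for u in utterances]
        return sum(lengths) / len(lengths)

    print(f"MLU = {mlu(utterances):.2f}")  # (4 + 1 + 7) / 3 = 4.00
    ```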

    GEMINI: A Natural Language System for Spoken-Language Understanding

    Gemini is a natural language understanding system developed for spoken language applications. The paper describes the architecture of Gemini, paying particular attention to resolving the tension between robustness and overgeneration. Gemini features a broad-coverage unification-based grammar of English, fully interleaved syntactic and semantic processing in an all-paths, bottom-up parser, and an utterance-level parser to find interpretations of sentences that might not be analyzable as complete sentences. Gemini also includes novel components for recognizing and correcting grammatical disfluencies, and for applying parse preferences. This paper presents a component-by-component view of Gemini, providing detailed relevant measurements of size, efficiency, and performance.
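    The abstract mentions an all-paths, bottom-up parser. As a generic illustration of that strategy (a textbook CYK recognizer over a toy grammar in Chomsky normal form, not Gemini's unification-based grammar or parser):

    ```python
    # Textbook bottom-up (CYK) recognizer over a toy CNF grammar. This shows
    # the general all-paths, bottom-up strategy only; it is not Gemini's
    # unification-based grammar or parser.
    GRAMMAR = {            # binary rules: (B, C) -> A, i.e. A -> B C
        ("NP", "VP"): "S",
        ("Det", "N"): "NP",
        ("V", "NP"): "VP",
    }
    LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

    def cyk(words):
        n = len(words)
        # chart[i][j] holds every category spanning words[i:j+1] (all paths)
        chart = [[set() for _ in range(n)] for _ in range(n)]
        for i, w in enumerate(words):
            chart[i][i].add(LEXICON[w])
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for k in range(i, j):
                    for b in chart[i][k]:
                        for c in chart[k + 1][j]:
                            if (b, c) in GRAMMAR:
                                chart[i][j].add(GRAMMAR[(b, c)])
        return chart[0][n - 1]

    print(cyk("the dog chased the cat".split()))  # -> {'S'}
    ```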

    From Monologue to Dialogue: Natural Language Generation in OVIS

    This paper describes how a language generation system that was originally designed for monologue generation has been adapted for use in the OVIS spoken dialogue system. To meet the requirement that, in a dialogue, the system's utterances should make up a single, coherent dialogue turn, several modifications had to be made to the system. The paper also discusses the influence of dialogue context on information status, and its consequences for the generation of referring expressions and accentuation.
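    The point about information status can be illustrated with a deliberately simple rule (this decision rule and the data are my invention, not the OVIS generator's): an entity already in the dialogue context is realized in a reduced form, a new one as a full description.

    ```python
    # Toy illustration of context-sensitive referring-expression generation:
    # a previously mentioned (given) entity is pronominalized, a new one gets
    # a full description. The decision rule and the data are invented.
    def refer(entity, context, descriptions):
        if entity in context:
            return "it"                  # given information: reduced form
        context.add(entity)
        return descriptions[entity]      # new information: full description

    descriptions = {"train_8801": "the 10:15 train to Amsterdam"}
    context = set()
    print(refer("train_8801", context, descriptions))  # first mention: full NP
    print(refer("train_8801", context, descriptions))  # second mention: pronoun
    ```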

    Pauses and the temporal structure of speech

    Natural-sounding speech synthesis requires close control over the temporal structure of the speech flow. This includes a full predictive scheme for the durational structure, in particular the prolongation of final syllables of lexemes, as well as for the pausal structure of the utterance. In this chapter, a description of the temporal structure and a summary of the numerous factors that modify it are presented. In the second part, predictive schemes for the temporal structure of speech ("performance structures") are introduced, and their potential for characterising the overall prosodic structure of speech is demonstrated.
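    As a schematic illustration of the kind of control involved (the base duration, the 1.3 final-lengthening factor, and the pause duration are invented placeholder values, not the chapter's fitted model):

    ```python
    # Schematic sketch of duration control for synthesis: lengthen the final
    # syllable of each phrase and insert a pause at the phrase boundary.
    # All numeric values are invented placeholders, not a fitted model.
    def timed_syllables(phrases, base_ms=180, final_stretch=1.3, pause_ms=250):
        timeline = []
        for phrase in phrases:
            for i, syl in enumerate(phrase):
                is_final = (i == len(phrase) - 1)
                dur = base_ms * (final_stretch if is_final else 1.0)
                timeline.append((syl, round(dur)))
            timeline.append(("<pause>", pause_ms))  # boundary pause
        return timeline

    phrases = [["na", "tu", "ral"], ["sound", "ing"]]
    for unit, ms in timed_syllables(phrases):
        print(f"{unit:8s} {ms:4d} ms")
    ```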

    Syntactic Topic Models

    The syntactic topic model (STM) is a Bayesian nonparametric model of language that discovers latent distributions of words (topics) that are both semantically and syntactically coherent. The STM models dependency-parsed corpora in which sentences are grouped into documents. It assumes that each word is drawn from a latent topic chosen by combining document-level features and the local syntactic context. Each document has a distribution over latent topics, as in topic models, which provides the semantic consistency. Each element in the dependency parse tree also has a distribution over the topics of its children, as in latent-state syntax models, which provides the syntactic consistency. These distributions are convolved so that the topic of each word is likely under both its document and its syntactic context. We derive a fast posterior inference algorithm based on variational methods. We report qualitative and quantitative studies on both synthetic data and hand-parsed documents. We show that the STM is a more predictive model of language than current models based only on syntax or only on topics.
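    The key modeling step, making a word's topic likely under both its document's topic distribution and its parse-tree parent's transition distribution, can be sketched numerically. In the simplest reading, the abstract's "convolved" combination is a renormalized elementwise product; the actual model places priors on these distributions and infers them variationally, none of which is reproduced here, and the numbers below are made up:

    ```python
    # Toy numeric sketch of the STM's core step: a word's topic must be likely
    # under BOTH its document's topic proportions and its parse-tree parent's
    # child-topic distribution, so the two are multiplied and renormalized.
    # The numbers below are made up, not inferred values.
    import numpy as np

    theta_doc = np.array([0.6, 0.3, 0.1])   # document's topic proportions
    pi_parent = np.array([0.1, 0.2, 0.7])   # parent's child-topic distribution

    combined = theta_doc * pi_parent        # likely under both distributions
    combined /= combined.sum()              # renormalize to a distribution

    print(combined)                         # ~ [0.316 0.316 0.368]
    rng = np.random.default_rng(0)
    print("sampled topic:", rng.choice(len(combined), p=combined))
    ```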

    Robust Processing of Natural Language

    Previous approaches to robustness in natural language processing usually treat deviant input by relaxing grammatical constraints whenever a successful analysis cannot be provided by "normal" means. This schema implies that error detection always comes prior to error handling, a behaviour which can hardly compete with its human model, where many erroneous situations are handled without even being noticed. The paper analyses the necessary preconditions for achieving a higher degree of robustness in natural language processing and suggests a quite different approach based on a procedure for structural disambiguation. It not only offers the possibility of coping with robustness issues in a more natural way, but might eventually be suited to accommodating quite different aspects of robust behaviour within a single framework.