5 research outputs found

    MUSE CSP: An Extension to the Constraint Satisfaction Problem

    This paper describes an extension to the constraint satisfaction problem (CSP) called MUSE CSP (MUltiply SEgmented Constraint Satisfaction Problem). This extension is especially useful for those problems which segment into multiple sets of partially shared variables. Such problems arise naturally in signal processing applications including computer vision, speech processing, and handwriting recognition. For these applications, it is often difficult to segment the data in only one way given the low-level information utilized by the segmentation algorithms. MUSE CSP can be used to compactly represent several similar instances of the constraint satisfaction problem. If multiple instances of a CSP have some common variables which have the same domains and constraints, then they can be combined into a single instance of a MUSE CSP, reducing the work required to apply the constraints. We introduce the concepts of MUSE node consistency, MUSE arc consistency, and MUSE path consistency. We then demonstrate how MUSE CSP can be used to compactly represent lexically ambiguous sentences and the multiple sentence hypotheses that are often generated by speech recognition algorithms so that grammar constraints can be used to provide parses for all syntactically correct sentences. Algorithms for MUSE arc and path consistency are provided. Finally, we discuss how to create a MUSE CSP from a set of CSPs which are labeled to indicate when the same variable is shared by more than a single CSP.
    Comment: See http://www.jair.org/ for any accompanying file.
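    As an illustrative aside (not the MUSE arc-consistency algorithm defined in the paper), the Python sketch below runs a standard AC-3 arc-consistency pass over a single store of domains and constraints built from two hypothetical problem instances that share a variable; every variable name and constraint here is invented for the example.

```python
from collections import deque

def ac3(domains, constraints):
    """Plain AC-3 arc consistency over a shared variable/constraint store.

    domains:     dict mapping variable -> set of candidate values
    constraints: dict mapping (x, y) -> predicate(vx, vy), True when the pair is allowed
    Returns False if any domain is emptied, True otherwise.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        unsupported = {vx for vx in domains[x]
                       if not any(allowed(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            if not domains[x]:
                return False
            # Re-examine arcs that point into x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Two hypothetical problem instances (segmentations) that share variable "B";
# merging them lets the shared constraints be enforced once rather than once per instance.
domains = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}}
constraints = {
    ("A", "B"): lambda a, b: a < b,    # from instance 1
    ("B", "A"): lambda b, a: a < b,
    ("B", "C"): lambda b, c: b != c,   # from instance 2
    ("C", "B"): lambda c, b: b != c,
}
print(ac3(domains, constraints), domains)
```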

    Implementing a Hidden Markov Model with Duration Modeling on the MasPar MP-1

    This paper describes the parallel implementation of a Hidden Markov Model (HMM) for spoken language recognition on the MasPar MP-1. A major drawback of using HMMs for speech recognition is the amount of processing time required to develop and test the model. By exploiting the massive parallelism of explicit duration HMMs, we can develop more complex models for real-time speech recognition. Implementational issues such as choice of data structures, method of communication, and utilization of parallel computation functions will be explored. The results of our experiments show that the parallelism in HMMs can be effectively exploited by the MP-1. Training that used to take more than a week can now be completed in about an hour. Once trained, the system can recognize the phones of a test utterance in a fraction of a second.
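    To make the duration-modeling idea concrete, the sketch below is a plain serial NumPy version of the explicit-duration (semi-Markov) HMM forward recursion; it is an assumed textbook formulation shown for illustration, not the MasPar MP-1 implementation described in the paper. The per-state vector operations in the inner loop are the kind of work a data-parallel machine can evaluate across all states at once.

```python
import numpy as np

def duration_hmm_forward(pi, A, B, P_dur, obs):
    """Likelihood of `obs` under an explicit-duration (semi-Markov) HMM.

    pi:    (N,)   initial state probabilities
    A:     (N, N) transition probabilities between states
    B:     (N, M) emission probabilities, B[j, o]
    P_dur: (N, D) P_dur[j, d-1] = probability that state j lasts d frames
    obs:   list of observation symbol indices
    Returns the total probability of the observation sequence.
    """
    T, N = len(obs), len(pi)
    D = P_dur.shape[1]
    # alpha[t, j]: probability of o_1..o_t with a segment in state j ending at t
    alpha = np.zeros((T + 1, N))
    for t in range(1, T + 1):
        for d in range(1, min(D, t) + 1):
            # joint emission probability of the d frames covered by the segment
            seg = np.prod([B[:, obs[s]] for s in range(t - d, t)], axis=0)
            # segment either starts the utterance or is entered via a transition
            entry = pi if d == t else alpha[t - d] @ A
            alpha[t] += entry * P_dur[:, d - 1] * seg
    return alpha[T].sum()
```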

    Parsing using the PARSEC Vector Processing Chip

    This paper describes the implementation of the PARSEC chip, a vector processing element (PE) for parsing languages. The chip has applications not only in natural language processing but also in other constraint satisfaction problems. The PARSEC chip is based on a parsing algorithm which formerly ran in real time on a massively parallel machine [4]; however, the chip achieves processing speeds fast enough for real-time language processing systems while having a price and form suitable for mass-market applications.
    Key Words: artificial intelligence architectures and applications, VLSI
    A key component of any natural language interface is its parsing algorithm. Because some features of English (e.g., context) are clumsy or impossible to handle using existing parsers, we have extended and implemented a parsing algorithm based on a new, flexible grammatical formalism, called Constraint Dependency Grammar (CDG), introduced by Maruyama [5, 6, 7]. ...
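    As a toy illustration of the Constraint Dependency Grammar idea the chip targets (and not a model of the PARSEC hardware itself), the sketch below gives each word a variable whose domain is a set of hypothetical (role, head) analyses and prunes that domain with a made-up unary constraint; the grammar and sentence are assumptions for the example.

```python
# Each word variable's domain is a set of (role, head_index) analyses;
# constraints prune analyses until only consistent parses remain.
words = ["the", "dog", "barks"]
domains = {
    0: {("DET", 1)},                  # "the" modifies word 1
    1: {("SUBJ", 2), ("OBJ", 2)},     # "dog" could be subject or object of "barks"
    2: {("ROOT", None)},              # "barks" is the root of the sentence
}

def unary_ok(i, role, head):
    """Toy constraint: SUBJ/DET precede their head, OBJ follows its head."""
    if role in ("SUBJ", "DET"):
        return head is not None and i < head
    if role == "OBJ":
        return head is not None and i > head
    return True  # ROOT and anything else is unconstrained here

for i in domains:
    domains[i] = {(r, h) for (r, h) in domains[i] if unary_ok(i, r, h)}

print(domains)  # ("OBJ", 2) is pruned; only the SUBJ reading of "dog" survives
```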

    Integrating Language Models with Speech Recognition

    The question of how to integrate language models with speech recognition systems is becoming more important as speech recognition technology matures. For the purposes of this paper, we have classified the level of integration of current and past approaches into three categories: tightly-coupled, loosely-coupled, or semi-coupled systems. We then argue that loose coupling is more appropriate given the current state of the art and given that it allows one to measure more precisely which components of the language model are most important. We will detail how the speech component in our approach interacts with the language model and discuss why we chose our language model.
    1 Introduction
    State-of-the-art speech recognition systems achieve high recognition accuracies only on tasks that have low perplexities. The perplexity of a task is, roughly speaking, the average number of choices at any decision point. The perplexity of a task is at a minimum when the true language model is known and ...
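    The notion of perplexity can be made concrete with a short sketch: the standard formulation is the exponential of the per-word cross-entropy, so a model that always faces k equally likely choices has perplexity k. The probabilities below are invented for illustration.

```python
import math

def perplexity(probs):
    """Perplexity of a word sequence from the model's per-word probabilities.

    probs: list of p(w_i | history) assigned by the language model.
    Computed as 2 ** (per-word cross-entropy in bits), i.e. the geometric-mean
    branching factor faced by the model.
    """
    n = len(probs)
    cross_entropy = -sum(math.log2(p) for p in probs) / n
    return 2 ** cross_entropy

# A model that always faces 10 equally likely words has perplexity 10 ...
print(perplexity([0.1] * 20))             # -> 10.0
# ... while a model that usually predicts the next word well scores lower.
print(perplexity([0.5, 0.25, 0.5, 0.8]))  # -> about 2.1
```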