    Domain independent goal recognition

    Goal recognition is generally considered to follow plan recognition. The plan recognition problem is typically defined to be that of identifying which plan in a given library of plans is being executed, given a sequence of observed actions. Once a plan has been identified, the goal of the plan can be assumed to follow. In this work, we address the problem of goal recognition directly, without assuming a plan library. Instead, we start with a domain description, just as is used for plan construction, and a sequence of action observations. The task, then, is to identify which possible goal state is the ultimate destination of the trajectory being observed. We present a formalisation of the problem and motivate its interest, before describing some simplifying assumptions we have made to arrive at a first implementation of a goal recognition system, AUTOGRAPH. We discuss the techniques employed in AUTOGRAPH to arrive at a tractable approximation of the goal recognition problem and show results for the system we have implemented.
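    The abstract does not spell out AUTOGRAPH's approximation techniques, so the following is only a minimal, hypothetical sketch of the problem setting it describes: given STRIPS-style action observations and a set of candidate goal states, rank the goals by how strongly the observed effects point towards them. The domain, action names, and scoring rule are illustrative assumptions, not the paper's method.

        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Action:
            """STRIPS-style action: name plus add/delete effects (preconditions omitted)."""
            name: str
            add: frozenset = field(default_factory=frozenset)
            delete: frozenset = field(default_factory=frozenset)

        def rank_goals(candidate_goals, observed_actions):
            """Score each candidate goal by how many of its facts appear among the
            observed add-effects; a crude stand-in for a real recognition criterion."""
            scores = {goal: sum(len(a.add & set(goal)) for a in observed_actions)
                      for goal in candidate_goals}
            return sorted(scores, key=scores.get, reverse=True)

        # Toy logistics-style observations (purely illustrative names).
        load  = Action("load",  add=frozenset({"holding_pkg"}))
        drive = Action("drive", add=frozenset({"at_depot"}), delete=frozenset({"at_home"}))
        goals = [("holding_pkg", "at_depot"), ("at_home",)]
        print(rank_goals(goals, [load, drive]))  # the depot-delivery goal ranks first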

    Design of amino acid sequences to fold into C_alpha-model proteins

    In order to extend the results obtained with minimal lattice models to more realistic systems, we study a model where proteins are described as a chain of 20 kinds of structureless amino acids moving in a continuum space and interacting through a contact potential controlled by a 20x20 quenched random matrix. The goal of the present work is to design and characterize amino acid sequences folding to the SH3 conformation, a 60-residue recognition domain common to many regulatory proteins. We show that a number of sequences can fold, starting from a random conformation, to within a distance root mean square deviation (dRMSD) of 2.6 Å from the native state. Good folders are those sequences displaying in the native conformation an energy lower than a sequence-independent threshold energy.
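    As a rough illustration of the energy model the abstract describes (a contact potential between C_alpha-like residues governed by a 20x20 quenched random matrix), here is a minimal sketch. The Gaussian matrix entries, the 7.5 Å contact cutoff, and the |i - j| > 2 chain-separation rule are assumptions for the example, not the paper's parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        # Quenched, symmetric 20x20 interaction matrix; Gaussian entries and the
        # 7.5 Å cutoff are illustrative assumptions, not the paper's exact values.
        B = rng.normal(size=(20, 20))
        B = (B + B.T) / 2.0
        CUTOFF = 7.5  # Å

        def contact_energy(coords, sequence):
            """Sum pairwise contact energies B[a_i, a_j] over residue pairs
            (|i - j| > 2) whose C_alpha positions lie within the cutoff."""
            n = len(sequence)
            energy = 0.0
            for i in range(n):
                for j in range(i + 3, n):  # skip near-neighbours along the chain
                    if np.linalg.norm(coords[i] - coords[j]) < CUTOFF:
                        energy += B[sequence[i], sequence[j]]
            return energy

        # Random 60-residue chain as a stand-in for the SH3 fold.
        coords = np.cumsum(rng.normal(scale=2.2, size=(60, 3)), axis=0)
        sequence = rng.integers(0, 20, size=60)
        print(contact_energy(coords, sequence))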

    Robustness issues in a data-driven spoken language understanding system

    Robustness is a key requirement in spoken language understanding (SLU) systems. Human speech is often ungrammatical and ill-formed, and there will frequently be a mismatch between training and test data. This paper discusses robustness and adaptation issues in a statistically based SLU system which is entirely data-driven. To test robustness, the system has been tested on data from the Air Travel Information Service (ATIS) domain which has been artificially corrupted with varying levels of additive noise. Although the speech recognition performance degraded steadily, the system did not fail catastrophically. Indeed, the rate at which the end-to-end performance of the complete system degraded was significantly slower than that of the actual recognition component. In a second set of experiments, the ability to rapidly adapt the core understanding component of the system to a different application within the same broad domain has been tested. Using only a small amount of training data, experiments have shown that a semantic parser based on the Hidden Vector State (HVS) model originally trained on the ATIS corpus can be straightforwardly adapted to the somewhat different DARPA Communicator task using standard adaptation algorithms. The paper concludes by suggesting that the results presented provide initial support to the claim that an SLU system which is statistically based and trained entirely from data is intrinsically robust and can be readily adapted to new applications.
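    The abstract reports corrupting the ATIS test data with varying levels of additive noise but does not specify the noise type or SNR points. The sketch below shows one generic way to add white Gaussian noise at a chosen SNR, purely to illustrate the kind of corruption being described; the signal, SNR values, and noise model are assumptions.

        import numpy as np

        def add_noise(signal, snr_db, rng=None):
            """Corrupt a waveform with white Gaussian noise at a target SNR in dB."""
            if rng is None:
                rng = np.random.default_rng(0)
            signal_power = np.mean(signal ** 2)
            noise_power = signal_power / (10.0 ** (snr_db / 10.0))
            noise = rng.normal(scale=np.sqrt(noise_power), size=signal.shape)
            return signal + noise

        # Example: corrupt a synthetic 1-second, 16 kHz "utterance" at several SNRs.
        t = np.linspace(0, 1, 16000, endpoint=False)
        clean = np.sin(2 * np.pi * 440 * t)
        for snr in (20, 10, 0):
            noisy = add_noise(clean, snr)
            print(snr, "dB SNR -> mean power", round(float(np.mean(noisy ** 2)), 3))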

    The Application of Blind Source Separation to Feature Decorrelation and Normalizations

    We apply a Blind Source Separation (BSS) algorithm to the decorrelation of Mel-warped cepstra. The observed cepstra are modeled as a convolutive mixture of independent source cepstra. The algorithm aims to minimize a cross-spectral correlation at different lags to reconstruct the source cepstra. Results show that using "independent" cepstra as features leads to a reduction in the word error rate (WER). Finally, we present three different enhancements to the BSS algorithm. We also present results for these variants of the original algorithm.
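    The abstract states that the BSS criterion minimizes cross-spectral correlation between cepstral components at different lags. The sketch below only computes that diagnostic on a block of observed cepstra (it does not perform the source reconstruction itself); the lag range, frame count, and mixing setup are illustrative assumptions.

        import numpy as np

        def lagged_cross_correlation(cepstra, max_lag=3):
            """Average absolute cross-correlation between pairs of cepstral
            coefficient trajectories at lags 0..max_lag; this is the kind of
            quantity the BSS criterion drives towards zero."""
            n_frames, n_coeffs = cepstra.shape
            c = cepstra - cepstra.mean(axis=0)          # remove per-coefficient mean
            totals = []
            for lag in range(max_lag + 1):
                a, b = c[: n_frames - lag], c[lag:]
                cov = a.T @ b / (n_frames - lag)        # lagged covariance matrix
                off_diag = cov - np.diag(np.diag(cov))  # ignore self-correlations
                totals.append(float(np.mean(np.abs(off_diag))))
            return totals

        # Example: 200 frames of 13 correlated pseudo-cepstral coefficients.
        rng = np.random.default_rng(0)
        mixing = rng.normal(size=(13, 13))
        cepstra = rng.normal(size=(200, 13)) @ mixing   # correlated "observed" cepstra
        print(lagged_cross_correlation(cepstra))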