
    Simplifying Language Through Error-Correcting Decoding

    In many speech processing tasks, most sentences convey rather simple meanings. In these tasks, the "word recognition" problem is much more difficult than the underlying "speech understanding" problem needs to be. Accordingly, we try to develop an adequate framework that focuses on a properly defined "understanding" of the sentences rather than on "recognizing" the (possibly) superfluous words. This can be seen as closely related to Spontaneous Language Understanding and Disfluency Modeling. In our approach, these problems are placed under the framework of Error-Correcting Decoding (ECD). A complex task is modeled in terms of a basic stochastic grammar, G, and an Error Model, E (taking insertions, substitutions and deletions into account). G should account for the basic (syntactic) structures underlying the task, which convey the semantics. E should account for general vocabulary variations, speech disfluencies, word disappearance, superfluous words, and so on. Each "complex" user sentence, x, is thus considered a corrupted version (according to E) of some "simple" sentence y of L(G). Recognition can then be seen as an ECD process: given x, find a sentence y of L(G) with maximum posterior probability. We introduce fast ECD techniques and adequate procedures for simultaneously training G and E, and apply these ideas to a simple task, with results showing the potential of the proposed approach.
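The ECD formulation described above can be illustrated with a minimal sketch. The toy language L(G), its prior probabilities, and the edit log-probabilities below are hypothetical placeholders, not the models from the paper; the decoder scores each candidate y by log P(y) + log P(x|y), where P(x|y) is approximated by the best word-level edit alignment (insertions, substitutions, deletions) under a fixed error model E.

```python
import math

# Hypothetical toy language L(G): "simple" sentences with prior probabilities P(y).
LANGUAGE = {
    ("turn", "on", "the", "light"): 0.6,
    ("turn", "off", "the", "light"): 0.4,
}

# Hypothetical error model E: log-probabilities of word-level edit operations.
LOG_P_MATCH = math.log(0.9)
LOG_P_SUB = math.log(0.02)   # vocabulary variation: one word replaced
LOG_P_INS = math.log(0.03)   # superfluous word inserted into x
LOG_P_DEL = math.log(0.03)   # word of y missing from x

def log_p_x_given_y(x, y):
    """Best-alignment (Viterbi) approximation of log P(x | y):
    dynamic programming over word-level edits, as in weighted edit distance."""
    n, m = len(x), len(y)
    # dp[i][j] = best log-prob of aligning x[:i] with y[:j]
    dp = [[-math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == -math.inf:
                continue
            if i < n and j < m:  # match or substitution
                cost = LOG_P_MATCH if x[i] == y[j] else LOG_P_SUB
                dp[i + 1][j + 1] = max(dp[i + 1][j + 1], dp[i][j] + cost)
            if i < n:            # insertion: extra word appears in x
                dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] + LOG_P_INS)
            if j < m:            # deletion: word of y dropped from x
                dp[i][j + 1] = max(dp[i][j + 1], dp[i][j] + LOG_P_DEL)
    return dp[n][m]

def decode(x):
    """ECD: return the y in L(G) with maximum posterior log P(y) + log P(x|y)."""
    return max(LANGUAGE,
               key=lambda y: math.log(LANGUAGE[y]) + log_p_x_given_y(x, y))

# A disfluent input with a filler word still decodes to a clean sentence of L(G).
print(decode(("uh", "turn", "off", "the", "light")))
```

A real system would replace the enumeration of L(G) with a search over a stochastic grammar composed with E, and the fixed edit probabilities with parameters trained jointly with G, as the abstract describes.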