25,618 research outputs found
Robust Dependency Parsing of Spontaneous Japanese Speech and Its Evaluation
Spontaneously spoken Japanese includes many grammatically ill-formed linguistic phenomena, such as fillers, hesitations, and inversions, which do not appear in written language. This paper proposes a method for robust dependency parsing using a large-scale spoken language corpus, and evaluates the effectiveness and robustness of the method on spontaneously spoken dialogue sentences. By utilizing stochastic information about the appearance of ill-formed phenomena, the method can robustly parse spoken Japanese containing fillers, inversions, or dependencies that cross utterance units. In an experiment, the method achieved a parsing accuracy of 87.0%, and we confirmed that it is effective to utilize the location of a bunsetsu and the distance between bunsetsus as stochastic information.
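The distance cue mentioned above can be sketched with a toy scorer. Everything here is a hypothetical illustration, not the paper's trained model: the probability table and the greedy head-picking rule are invented, and real Japanese dependency parsers use richer features and search.

```python
# Toy sketch of distance-based dependency scoring between bunsetsu
# (Japanese phrase units). The probability table below is invented:
# Japanese dependencies point rightward, and short distances are
# assumed more likely than long ones.
DISTANCE_PROB = {1: 0.6, 2: 0.2, 3: 0.1}
DEFAULT_PROB = 0.05

def pick_heads(bunsetsus):
    """Greedily attach each bunsetsu to the later bunsetsu whose
    distance has the highest probability; the last one is the root."""
    heads = []
    n = len(bunsetsus)
    for i in range(n - 1):
        best = max(range(i + 1, n),
                   key=lambda j: DISTANCE_PROB.get(j - i, DEFAULT_PROB))
        heads.append(best)
    heads.append(-1)  # final bunsetsu heads the sentence
    return heads

print(pick_heads(["kinou", "watashi-wa", "hon-o", "katta"]))
# under this toy table every non-final unit attaches to its successor
```

Under the invented table the distance-1 probability always wins, so the sketch degenerates to successor attachment; the paper's point is precisely that learned distance and location statistics make non-trivial, robust choices possible.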
Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy
In this paper we describe an approach to automatic evaluation of both the
speech recognition and understanding capabilities of a spoken dialogue system
for train time table information. We use word accuracy for recognition and
concept accuracy for understanding performance judgement. Both measures are
calculated by comparing these modules' output with a correct reference answer.
We report evaluation results for a spontaneous speech corpus with about 10000
utterances. We observed a nearly linear relationship between word accuracy and
concept accuracy.

Comment: 4 pages PS, LaTeX2e source importing 2 EPS figures, uses icslp.cls, caption.sty, psfig.sty; to appear in the Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP 96).
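Word accuracy as used in speech recognition evaluation is 1 − (S + D + I)/N, computed by Levenshtein alignment of the recognizer's hypothesis against a reference transcript; concept accuracy is defined analogously over semantic concepts instead of words. A minimal sketch (the example sentences are invented):

```python
# Minimal sketch of word accuracy: 1 - edit_distance / reference_length,
# where the edit distance counts substitutions, deletions, insertions.
def word_accuracy(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = min edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution/match
    return 1.0 - d[len(ref)][len(hyp)] / len(ref)

# one deletion ("the") and one substitution ("leave" -> "leaves")
# against a 5-word reference: accuracy = 1 - 2/5 = 0.6
print(word_accuracy("when does the train leave", "when does train leaves"))
```

Note that word accuracy can go negative when insertions outnumber reference words, which is why the paper's near-linear relationship to concept accuracy is an empirical observation rather than a mathematical necessity.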
Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech
We describe a statistical approach for modeling dialogue acts in
conversational speech, i.e., speech-act-like units such as Statement, Question,
Backchannel, Agreement, Disagreement, and Apology. Our model detects and
predicts dialogue acts based on lexical, collocational, and prosodic cues, as
well as on the discourse coherence of the dialogue act sequence. The dialogue
model is based on treating the discourse structure of a conversation as a
hidden Markov model and the individual dialogue acts as observations emanating
from the model states. Constraints on the likely sequence of dialogue acts are
modeled via a dialogue act n-gram. The statistical dialogue grammar is combined
with word n-grams, decision trees, and neural networks modeling the
idiosyncratic lexical and prosodic manifestations of each dialogue act. We
develop a probabilistic integration of speech recognition with dialogue
modeling, to improve both speech recognition and dialogue act classification
accuracy. Models are trained and evaluated using a large hand-labeled database
of 1,155 conversations from the Switchboard corpus of spontaneous
human-to-human telephone speech. We achieved good dialogue act labeling
accuracy (65% based on errorful, automatically recognized words and prosody,
and 71% based on word transcripts, compared to a chance baseline accuracy of
35% and human accuracy of 84%) and a small reduction in word recognition error.

Comment: 35 pages, 5 figures. Changes in copy editing (note title spelling changed).
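The HMM view described above, with dialogue acts as hidden states, a dialogue-act bigram as the transition model, and per-utterance likelihoods as emissions, can be sketched with a small Viterbi decoder. All probabilities below are invented for illustration; the paper's emission models combine word n-grams, decision trees, and neural networks.

```python
# Toy Viterbi decoding of dialogue acts. Hidden states = acts,
# transitions = hypothetical dialogue-act bigram, emissions =
# P(utterance | act) from some (here invented) lexical/prosodic model.
ACTS = ["Statement", "Question", "Backchannel"]
TRANS = {  # P(next act | previous act), invented bigram grammar
    "Statement":   {"Statement": 0.5, "Question": 0.3, "Backchannel": 0.2},
    "Question":    {"Statement": 0.7, "Question": 0.1, "Backchannel": 0.2},
    "Backchannel": {"Statement": 0.6, "Question": 0.3, "Backchannel": 0.1},
}

def viterbi(emissions, init):
    """emissions[t][act] = P(utterance_t | act); returns the most
    likely dialogue-act sequence under the bigram grammar."""
    v = [{a: init[a] * emissions[0][a] for a in ACTS}]
    back = []
    for t in range(1, len(emissions)):
        v.append({})
        back.append({})
        for a in ACTS:
            prev = max(ACTS, key=lambda p: v[t - 1][p] * TRANS[p][a])
            back[t - 1][a] = prev
            v[t][a] = v[t - 1][prev] * TRANS[prev][a] * emissions[t][a]
    last = max(ACTS, key=lambda a: v[-1][a])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

init = {"Statement": 0.5, "Question": 0.3, "Backchannel": 0.2}
# utterance 1 looks like a question, utterance 2 like a statement
emissions = [{"Statement": 0.2, "Question": 0.7, "Backchannel": 0.1},
             {"Statement": 0.8, "Question": 0.1, "Backchannel": 0.1}]
print(viterbi(emissions, init))
```

The sequence constraint matters: a question strongly predicts a following statement in the invented grammar, which is the kind of discourse-coherence cue the paper combines with the per-utterance evidence.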
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems
This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response.

Comment: SIGDIAL 2015. 10 pages, 5 figures. Update includes a link to a new version of the dataset, with some added features and bug fixes. See: https://github.com/rkadlec/ubuntu-ranking-dataset-creato
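The response-selection task above can be sketched with a trivial word-overlap scorer standing in for the paper's neural architectures: given a dialogue context and candidate responses, rank the candidates and return the best. The context and candidates below are invented.

```python
# Toy next-response selection: score each candidate by its word
# overlap with the dialogue context (a stand-in for the learned
# context/response matching the paper benchmarks).
def select_response(context, candidates):
    ctx = set(context.lower().split())

    def score(candidate):
        words = set(candidate.lower().split())
        return len(ctx & words) / len(words)  # overlap ratio

    return max(candidates, key=score)

print(select_response(
    "how do I mount a usb drive on ubuntu",
    ["try sudo mount /dev/sdb1 /mnt for the usb drive",
     "the weather is nice today"]))
```

Overlap baselines like this are exactly what neural models are benchmarked against on this task; the dataset's scale is what makes training the neural alternatives feasible.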