    PARADISE: A Framework for Evaluating Spoken Dialogue Agents

    This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
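    The factor-weighting the abstract mentions is, in the published paper, a performance function fit by multiple linear regression against user-satisfaction ratings: Performance = α·N(κ) − Σᵢ wᵢ·N(cᵢ), where N is z-score normalization, κ is a task-success measure, and the cᵢ are dialogue cost measures. Below is a minimal sketch of that function; the data and coefficients are purely illustrative, and the function names are hypothetical:

```python
import numpy as np

def zscore(x):
    """Normalize a measure to zero mean, unit variance (PARADISE's N())."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def paradise_performance(kappa, costs, alpha, weights):
    """PARADISE performance function:
        Performance = alpha * N(kappa) - sum_i w_i * N(c_i)

    kappa:   task-success scores, one per dialogue
    costs:   list of cost-measure arrays (e.g. elapsed time, turn count)
    alpha, weights: coefficients; in the paper these are fit by
        multiple linear regression against user-satisfaction ratings
    """
    perf = alpha * zscore(kappa)
    for w, c in zip(weights, costs):
        perf -= w * zscore(c)
    return perf

# Illustrative data: 5 dialogues, with two cost measures
kappa = [0.9, 0.7, 0.8, 0.5, 0.95]          # task success (kappa)
costs = [[120, 200, 150, 260, 100],          # elapsed time (s)
         [14, 25, 18, 30, 12]]               # system turns
print(paradise_performance(kappa, costs, alpha=0.5, weights=[0.3, 0.2]))
```

    Because every measure is normalized before weighting, the resulting performance scores can be compared across agents performing different tasks, which is the normalization for task complexity the abstract refers to.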

    From Novice To Expert: The Effect Of Tutorials On User Expertise With Spoken Dialogue Systems

    One of the challenges for the current state of the art in spoken dialogue systems is how to make the limitations of the system apparent to users. These limitations have many sources: limited vocabulary, limited grammar, or limitations in the application domain. This study explored the use of a 4-minute tutorial session to acquaint novice users with the features of a spoken dialogue system for accessing email. On a set of three scenario-based tasks, novice users who had the tutorial had task completion times and user satisfaction ratings comparable to those of expert users of the system. Novices who did not experience the tutorial had significantly longer task completion times on the initial task, but similar completion times to the tutorial group on the final task. User satisfaction ratings of the no-tutorial group were consistently lower than the ratings of the tutorial and expert groups. Evaluation using the PARADISE [7] framework indicated that perceived task completion…