
    Bringing together commercial and academic perspectives for the development of intelligent AmI interfaces

    The users of Ambient Intelligence systems expect intelligent behavior from their environment, receiving adapted and easily accessible services and functionality. This is only possible if the communication between the user and the system is carried out through an interface that is simple (i.e., without a steep learning curve), fluid (i.e., communication takes place rapidly and effectively), and robust (i.e., the system understands the user correctly). Natural language interfaces such as dialog systems combine these three requirements, as they are based on a spoken conversation between the user and the system that resembles human communication. Current industrial development of commercial dialog systems deploys robust interfaces in strictly defined application domains. However, commercial systems have not yet adopted the new perspective proposed in academic settings, which would allow straightforward adaptation of these interfaces to various application domains. This would be highly beneficial for their use in AmI settings, as the same interface could be used in varying environments. In this paper, we propose a new approach to bridge the gap between the academic and industrial perspectives in order to develop dialog systems using an academic paradigm while employing industrial standards, which makes it possible to obtain new-generation interfaces without changing the already existing commercial infrastructures. Our proposal has been evaluated through the successful development of a real dialog system that follows our proposed approach to manage dialog and generates code compliant with the industry-wide standard VoiceXML. Research funded by projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
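    The abstract above describes a system that generates dialog code compliant with VoiceXML. As a minimal sketch of what such output generation can look like, the following renders a one-field dialog as a VoiceXML 2.0 document; the element names (`vxml`, `form`, `field`, `prompt`) come from the VoiceXML 2.0 standard, while the field name and prompt text are hypothetical examples, not content from the paper's system.

```python
import xml.etree.ElementTree as ET

def render_voicexml(field_name: str, prompt: str) -> str:
    """Render a single-field dialog as a minimal VoiceXML 2.0 document.

    Element names follow the VoiceXML 2.0 standard; the dialog content
    here is a hypothetical illustration, not the paper's actual output.
    """
    vxml = ET.Element("vxml", version="2.0")
    form = ET.SubElement(vxml, "form", id="main")
    field = ET.SubElement(form, "field", name=field_name)
    ET.SubElement(field, "prompt").text = prompt
    return ET.tostring(vxml, encoding="unicode")

doc = render_voicexml("destination", "Where would you like to travel?")
```

    A real deployment would of course emit grammars, event handlers, and transitions as well; the point is only that a higher-level dialog specification can be compiled down to standard markup that existing commercial infrastructure already interprets.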

    Acquiring and Maintaining Knowledge by Natural Multimodal Dialog


    Adaptable dialogue architecture and runtime engine (AdaRTE): A framework for rapid prototyping of health dialog systems

    Spoken dialog systems have been increasingly employed to provide ubiquitous access via telephone to information and services for the non-Internet-connected public. They have been successfully applied in the health care context; however, speech technology requires a considerable development investment. The advent of VoiceXML reduced the proliferation of incompatible dialog formalisms, at the expense of adding even more complexity. This paper introduces a novel architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialog interactions through a high-level formalism, offering both declarative and procedural features. AdaRTE's aim is to provide a ground for deploying complex and adaptable dialogs whilst allowing experimentation and incremental adoption of innovative speech technologies. It enhances augmented transition networks with dynamic behavior, and drives multiple back-end realizers, including VoiceXML. It has been especially targeted at the health care context, because of the great scale and the need to reduce the barrier to widespread adoption of dialog systems.
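    AdaRTE builds on augmented transition networks (ATNs), in which arcs between dialog states carry conditions evaluated against a dialog context. The toy sketch below illustrates the general ATN idea only; the state names, context keys, and arc table are hypothetical and are not taken from AdaRTE's actual formalism.

```python
# Toy augmented-transition-network dialog fragment: each arc pairs a
# source state with a condition on the (hypothetical) dialog context
# and a destination state. Illustrative only, not AdaRTE's formalism.

def step(state, context, arcs):
    """Follow the first arc out of `state` whose condition holds."""
    for src, cond, dst in arcs:
        if src == state and cond(context):
            return dst
    return state  # no arc fired: remain in the current state

ARCS = [
    ("ask_symptom", lambda ctx: "symptom" in ctx, "ask_duration"),
    ("ask_duration", lambda ctx: "days" in ctx, "summarize"),
]

state = "ask_symptom"
state = step(state, {"symptom": "cough"}, ARCS)  # -> "ask_duration"
state = step(state, {}, ARCS)                    # condition fails: stays
```

    The "augmented" and "dynamic" aspects described in the abstract would correspond to arcs whose conditions and actions read and mutate a richer context at runtime, rather than the fixed table shown here.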

    Socially aware conversational agents


    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaption and convergence, and agent applications.

    Structured Dialogue State Management for Task-Oriented Dialogue Systems

    Human-machine conversational agents have developed at a rapid pace in recent years, bolstered by the application of advanced technologies such as deep learning. Today, dialogue systems are useful in assisting users in various activities, especially task-oriented dialogue systems in specific dialogue domains. However, they continue to be limited in many ways. Arguably the biggest challenge lies in the complexity of natural language and interpersonal communication, and the lack of human context and knowledge available to these systems. This leads to the question of whether dialogue systems, and in particular task-oriented dialogue systems, can be enhanced to leverage various language properties. This work focuses on the semantic structural properties of language in task-oriented dialogue systems. These structural properties are manifested through variable dependencies in dialogue domains; studying and accounting for these variables and their interdependencies is the main objective of this research. Contemporary task-oriented dialogue systems are typically developed with a multi-component architecture, where each component is responsible for a specific process in the conversational interaction. It is commonly accepted that the ability to understand user input in a conversational context, a responsibility generally assigned to the dialogue state tracking component, contributes substantially to the overall performance of dialogue systems. The output of the dialogue state tracking component, so-called dialogue states, is a representation of the aspects of a dialogue relevant to the completion of a task up to that point, and should also capture the task-structural properties of natural language. In a dialogue context, dialogue state variables are expressed through dialogue slots and slot values; hence the dialogue state variable dependencies are expressed as dependencies between dialogue slots and their values.
Incorporating slot dependencies in the dialogue state tracking process is herein hypothesised to enhance the accuracy of postulated dialogue states, and subsequently to improve the performance of task-oriented dialogue systems. Given this overall goal and approach to the improvement of dialogue systems, the work in this dissertation can be broken down into two related contributions: (i) a study of structural properties in dialogue states; and (ii) the investigation of novel modelling approaches to capture slot dependencies in dialogue domains. The analysis of language's structural properties was conducted with a corpus-based study to investigate whether variable dependencies, i.e., slot dependencies in dialogue system terminology, exist in dialogue domains, and if so, to what extent these dependencies affect the dialogue state tracking process. A number of public dialogue corpora were chosen for analysis, with a collection of statistical methods applied to them. Deep learning architectures have been shown in various works to be an effective method to model conversations and different types of machine learning challenges. In this research, in order to account for slot dependencies, a number of deep learning-based models were experimented with for the dialogue state tracking task. In particular, a multi-task learning system was developed to study the leveraging of common features and shared knowledge in the training of dialogue state tracking subtasks such as tracking different slots, hence investigating the associations between these slots. Beyond that, a structured prediction method, based on energy-based learning, was also applied to account for explicit dialogue slot dependencies. The study results show promising directions for solving the dialogue state tracking challenge for task-oriented dialogue systems. 
By accounting for slot dependencies in dialogue domains, dialogue states were produced more accurately when benchmarked against comparative modelling methods that do not take advantage of the same principle. Furthermore, the structured prediction method is applicable to various state-of-the-art modelling approaches for further study. In the long term, the study of dialogue state slot dependencies can potentially be expanded to a wider range of conversational aspects such as personality, preferences, and modalities, as well as user intents
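    The core idea of the thesis, dialogue states as slot-value pairs with interdependencies, can be sketched concretely. In the toy update rule below, setting one slot prunes dependent slots whose current values conflict; the domain, slot names, and dependency table are hypothetical illustrations, not drawn from the corpora studied in the thesis.

```python
# Minimal sketch of a dialogue state as slot-value pairs, with one
# explicit slot dependency. The restaurant-style slot names and the
# dependency table are hypothetical, chosen only for illustration.

state = {"food": None, "area": None, "price_range": None}

# Dependency table: assigning (slot, value) constrains dependent slots.
ALLOWED = {
    ("food", "japanese"): {"price_range": {"moderate", "expensive"}},
}

def update(state, slot, value):
    """Set a slot, then clear dependent slots whose values conflict."""
    state = dict(state, **{slot: value})
    for dep_slot, allowed in ALLOWED.get((slot, value), {}).items():
        if state.get(dep_slot) not in allowed:
            state[dep_slot] = None  # a conflicting value cannot stand
    return state

s = update(state, "price_range", "cheap")
s = update(s, "food", "japanese")  # prunes the conflicting price_range
```

    A learned tracker replaces the hand-written table with statistics estimated from corpora (or, as in the thesis, with multi-task or energy-based models), but the structural point is the same: slots are not tracked independently.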

    Enabling robust and fluid spoken dialogue with cognitively impaired users

    Yaghoubzadeh R, Kopp S. Enabling robust and fluid spoken dialogue with cognitively impaired users. In: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. Saarbrücken, Germany: Association for Computational Linguistics; 2017: 273-283. We present the flexdiam dialogue management architecture, which was developed in a series of projects dedicated to tailoring spoken interaction to the needs of users with cognitive impairments in an everyday assistive domain, using a multimodal front-end. This hybrid DM architecture affords incremental processing of uncertain input; a flexible, mixed-initiative information grounding process that can be adapted to users' cognitive capacities and interactive idiosyncrasies; and generic mechanisms that foster transitions in the joint discourse state that are understandable and controllable by those users, in order to effect a robust interaction for users with varying capacities. [Link to poster and supplemental materials](https://purl.org/net/ramin/sigdial2017)

    Characterizing and recognizing spoken corrections in human-computer dialog

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 103-106). Miscommunication in human-computer spoken language systems is unavoidable. Recognition failures on the part of the system necessitate frequent correction attempts by the user. Unfortunately and counterintuitively, users' attempts to speak more clearly in the face of recognition errors actually lead to decreased recognition accuracy. The difficulty of correcting these errors, in turn, leads to user frustration and poor assessments of system quality. Most current approaches to identifying corrections rely on detecting violations of task or belief models, and are ineffective where such constraints are weak and recognition results are inaccurate or unavailable. In contrast, the approach pursued in this thesis uses the acoustic contrasts between original inputs and repeat corrections to identify corrections in a more content- and context-independent fashion. This thesis quantifies and builds upon the observation that suprasegmental features, such as duration, pause, and pitch, play a crucial role in distinguishing corrections from other forms of input to spoken language systems. These features can also be used to identify spoken corrections and explain reductions in recognition accuracy for these utterances. By providing a detailed characterization of acoustic-prosodic changes in corrections relative to original inputs in a voice-only system, this thesis contributes to natural language processing and spoken language understanding. We present a treatment of systematic acoustic variability in speech recognizer input as a source of new information, to interpret the speaker's corrective intent, rather than simply as noise or user error. 
We demonstrate the application of a machine-learning technique, decision trees, for identifying spoken corrections, and achieve accuracy rates close to human levels of performance for corrections of misrecognition errors, using acoustic-prosodic information. This process is simple and local, and depends neither on perfect transcription of the recognition string nor on complex reasoning based on the full conversation. We further extend the conventional analysis of speaking styles beyond a 'read' versus 'conversational' contrast to extreme clear speech, describing divergence from phonological and durational models for words in this style. By Gina-Anne Levow. Ph.D.
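    The classification idea described in the abstract, deciding from acoustic-prosodic deltas whether a repeat is a correction, can be illustrated with a hand-rolled stand-in for a decision tree. The feature names and thresholds below are illustrative assumptions, not the trained tree from the thesis.

```python
# Hand-rolled stand-in for a decision-tree correction classifier:
# flag a repeated utterance as a spoken correction from acoustic-prosodic
# changes relative to the original input. Feature names and thresholds
# are illustrative assumptions, not the trained tree from the thesis.

def is_correction(duration_ratio: float,
                  pause_delta_ms: float,
                  pitch_range_delta: float) -> bool:
    """Tiny decision 'tree': corrections tend to be slower, with longer
    pauses and an expanded pitch range relative to the original input."""
    if duration_ratio > 1.2:            # repeat noticeably lengthened
        return True
    if pause_delta_ms > 150:            # markedly longer pauses...
        return pitch_range_delta > 0    # ...plus a widened pitch range
    return False

slower_repeat = is_correction(1.4, 0.0, 0.0)   # lengthened: flagged
plain_repeat = is_correction(1.0, 50.0, 5.0)   # near-identical: not
```

    In practice such thresholds are induced from labeled data (e.g., with a standard decision-tree learner) rather than hand-set; the appeal noted in the abstract is that the decision is local to the utterance pair and needs no transcript or discourse model.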

    Principles of Human Computer Interaction Design: HCI Design

    This book covers the design, evaluation, and development process for interactive human-computer interfaces, including user interface design principles, task analysis, interface design methods, auditory interfaces, haptics, user interface evaluation, usability testing, prototyping, issues in interface construction, interface evaluation, and World Wide Web and mobile device interface issues. The book is ideal for the student who wants to learn how to use prototyping tools as part of interface design and how to evaluate an interface and its interaction quality by using usability testing techniques.