7,810 research outputs found

    A Personalized System for Conversational Recommendations

    Searching for and making decisions about information is becoming increasingly difficult as the amount of information and the number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, compared to a control group of users interacting with a non-adaptive version of the system.

    Robust Dialog Management Through A Context-centric Architecture

    This dissertation presents and evaluates a method of managing spoken dialog interactions with a robust attention to fulfilling the human user's goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine's ability to communicate may be hindered by poor reception of utterances, caused by a user's inadequate command of a language and/or faults in the speech recognition facilities. Since speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition, and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user's assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
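A hedged illustration of one way context can add the kind of robustness the abstract describes: given an n-best list of hypotheses from a noisy recognizer, prefer the one most consistent with the currently active conversational context. The vocabulary, confidence values, and bonus weight below are invented for illustration; the actual system applies Context-Based Reasoning to a full discourse model, not a word-overlap score.

```python
# Minimal sketch (not the dissertation's method): rescore ASR n-best
# hypotheses so that in-context words can outweigh raw recognizer
# confidence when recognition is unreliable.

def rescore(nbest, active_context_vocab, context_bonus=0.3):
    """Return the (hypothesis, confidence) pair with the best combined score."""
    def score(item):
        text, confidence = item
        # Count how many words of the hypothesis belong to the active context.
        in_context = sum(w in active_context_vocab for w in text.split())
        return confidence + context_bonus * in_context
    return max(nbest, key=score)

# The acoustically stronger hypothesis loses to the contextually coherent one.
nbest = [("wreck a nice beach", 0.55), ("recognize speech", 0.50)]
context = {"recognize", "speech", "dictation"}
best, _ = rescore(nbest, context)
print(best)  # recognize speech
```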

    CSE: U: Mixed-initiative Personal Assistant Agents

    Specification and implementation of flexible human-computer dialogs is challenging because of the complexity involved in rendering the dialog responsive to the vast number of varied paths through which users might desire to complete it. To address this problem, we developed a toolkit for modeling and implementing task-based, mixed-initiative dialogs based on metaphors from lambda calculus. Our toolkit can automatically operationalize a dialog that involves multiple prompts and/or sub-dialogs, given a high-level specification of it. The use of natural language with the resulting dialogs makes the flexibility in communicating user utterances commensurate with that in dialog completion paths, an aspect missing from commercial assistants like Siri. Our results demonstrate that the dialogs authored with our toolkit support the end user's completion of a human-computer dialog in a manner that is most natural to them, in a mixed-initiative fashion that resembles human-human interaction.
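The flexibility described above, where a single user utterance may answer several prompts at once and the dialog still completes along any path, can be illustrated with a minimal slot-filling sketch. All names and the "slot=value" utterance format are hypothetical; the actual toolkit derives such behavior from a lambda-calculus-based specification, which is not reproduced here.

```python
# Minimal mixed-initiative slot-filling sketch (illustrative, not the
# authors' toolkit). The dialog completes regardless of the order in
# which values arrive, and one utterance may fill several slots.

def parse_utterance(utterance, slots):
    """Extract 'slot=value' pairs; a real system would use NLU instead."""
    filled = {}
    for token in utterance.split(","):
        if "=" in token:
            key, value = (part.strip() for part in token.split("=", 1))
            if key in slots:
                filled[key] = value
    return filled

def run_dialog(slots, utterances):
    """Prompt for each unfilled slot; accept extra slots volunteered early."""
    state = dict.fromkeys(slots)
    script = iter(utterances)
    while any(v is None for v in state.values()):
        pending = next(k for k, v in state.items() if v is None)
        # System initiative: ask about `pending`; the user may answer more.
        reply = next(script)
        state.update(parse_utterance(reply, slots))
    return state

# One completion path: the user volunteers two values in the first turn,
# so the system never needs to ask about the city separately.
result = run_dialog(
    ["cuisine", "city", "price"],
    ["cuisine=thai, city=Raleigh", "price=cheap"],
)
print(result)  # {'cuisine': 'thai', 'city': 'Raleigh', 'price': 'cheap'}
```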

    A Framework for the Measurement of Simulated Behavior Performance

    Recent developments in video games, simulation, training, and robotics have pushed for greater visual and behavioral realism. As the education, training, and simulation communities increasingly rely on high-fidelity models to inform strategic and tactical decisions, the accuracy and credibility of simulated behavior become more important. Credibility is typically established through verification and validation techniques, and there is increased interest in bringing behavioral realism to the same level as visual realism. Thus far, however, the validation process for behavioral models has remained unclear. With realistic behavior as a major goal, this research investigates the validation problem and provides a process for quantifying behavioral correctness. We design a representation of behavior based on kinematic features capturable from persistent sensors and develop a domain-independent classification framework for measuring behavior replication. We demonstrate functionality through correct behavior comparison and evaluation of sample simulated behaviors.
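The comparison idea above, summarizing a trajectory by kinematic features and measuring how closely a simulated behavior replicates a reference one, can be sketched as follows. The specific features (mean speed, mean turn magnitude) and the Euclidean metric are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: compare simulated behavior against reference behavior
# via kinematic features derived from (x, y) position tracks.
import math

def kinematic_features(track):
    """Summarize an (x, y) trajectory as (mean speed, mean turn magnitude)."""
    speeds, headings = [], []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0))
        headings.append(math.atan2(y1 - y0, x1 - x0))
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    return (sum(speeds) / len(speeds), sum(turns) / max(len(turns), 1))

def behavior_distance(sim_track, ref_track):
    """Euclidean distance between feature vectors; 0 means identical summaries."""
    return math.dist(kinematic_features(sim_track), kinematic_features(ref_track))

straight = [(i, 0) for i in range(5)]
zigzag = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
# A straight simulated track replicates the straight reference better
# than a zigzag one does, regardless of the domain the tracks came from.
assert behavior_distance(straight, straight) < behavior_distance(zigzag, straight)
```

A classifier built on such features (e.g., nearest reference behavior under this distance) would label a simulated track with the real behavior it most closely replicates, which is the domain-independent measurement role the framework fills.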

    A framework to develop adaptive multimodal dialog systems for Android-based mobile devices

    Proceedings of: 9th International Conference (HAIS 2014), Salamanca, Spain, June 11-13, 2014.
    Mobile device programming has emerged as a new trend in software development. The main developers of operating systems for such devices provide APIs for implementing applications, including different solutions for voice control. Android, the most popular alternative among developers, offers libraries to build interfaces that include different resources for graphical layouts as well as speech recognition and text-to-speech synthesis. Despite the usefulness of these classes, there are no defined strategies for multimodal interface development on Android, so developers create ad-hoc solutions that make apps costly to implement and difficult to compare and maintain. In this paper we propose a framework to facilitate the software engineering life cycle of multimodal interfaces in Android. Our proposal integrates the facilities of the Android API into a modular architecture that emphasizes interaction management and context-awareness to build sophisticated, robust, and maintainable applications.
    This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).

    A personalized system for conversational recommendations

    technical report
    Increased computing power and the Web have made information widely accessible. In turn, this has encouraged the development of recommendation systems that help users find items of interest, such as books or restaurants. Such systems are more useful when they personalize themselves to each user's preferences, thus making the recommendation process more efficient and effective. In this paper, we present a new type of recommendation system that carries out a personalized dialogue with the user. This system -- the Adaptive Place Advisor -- treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. The system incorporates a user model that contains item, attribute, and value preferences, which it updates during each conversation and maintains across sessions. The Place Advisor uses both the conversational context and the user model to retrieve candidate items from a case base. The system then continues to ask questions, using personalized heuristics to select which attribute to ask about next, presenting complete items to the user only when a few remain. We report experimental results demonstrating the effectiveness of user modeling in reducing the time and number of interactions required to find a satisfactory item.
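The narrowing loop this abstract describes, asking about one attribute at a time, filtering the case base by the answer, and presenting items once few candidates remain, can be sketched as below. The attribute-scoring heuristic and the preference weights are assumptions standing in for the Place Advisor's personalized heuristics, and the restaurant data is invented.

```python
# Illustrative conversational narrowing loop (not the Place Advisor's
# actual implementation): filter candidates by each answer and pick the
# next question with a preference-weighted heuristic.

def next_attribute(candidates, unasked, preference_weight):
    """Prefer attributes that split the remaining items into many distinct
    values, weighted by how much this user cares about the attribute."""
    def score(attr):
        distinct = len({item[attr] for item in candidates})
        return distinct * preference_weight.get(attr, 1.0)
    return max(unasked, key=score)

def recommend(candidates, answers_for, preference_weight, present_at=1):
    """Ask until at most `present_at` items remain, then present them."""
    unasked = set(candidates[0]) - {"name"}
    while len(candidates) > present_at and unasked:
        attr = next_attribute(candidates, unasked, preference_weight)
        unasked.discard(attr)
        value = answers_for(attr)  # the user's reply to the question
        candidates = [c for c in candidates if c[attr] == value]
    return candidates

restaurants = [
    {"name": "A", "cuisine": "thai", "price": "low"},
    {"name": "B", "cuisine": "thai", "price": "high"},
    {"name": "C", "cuisine": "pizza", "price": "low"},
    {"name": "D", "cuisine": "pizza", "price": "high"},
]
answers = {"cuisine": "thai", "price": "low"}
result = recommend(restaurants, answers.get, {"cuisine": 2.0, "price": 1.0})
print([r["name"] for r in result])  # ['A']
```

In the full system the weights would come from the long-term user model updated across sessions, so a user who always cares about cuisine would be asked about it first.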