
    Staging Transformations for Multimodal Web Interaction Management

    Multimodal interfaces are becoming increasingly ubiquitous with the advent of mobile devices, accessibility considerations, and novel software technologies that combine diverse interaction media. In addition to improving access and delivery capabilities, such interfaces enable flexible and personalized dialogs with websites, much like a conversation between humans. In this paper, we present a software framework for multimodal web interaction management that supports mixed-initiative dialogs between users and websites. A mixed-initiative dialog is one in which the user and the website take turns changing the flow of interaction. The framework supports the functional specification and realization of such dialogs using staging transformations -- a theory for representing and reasoning about dialogs based on partial input. It supports multiple interaction interfaces, and offers sessioning, caching, and coordination functions through the use of an interaction manager. Two case studies are presented to illustrate the promise of this approach. Comment: Describes a framework and software architecture for multimodal web interaction management.
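    The abstract names staging transformations only at a high level; the sketch below is a loose Python illustration of how a dialog stage might absorb partial input from either party and re-stage the remaining prompts. All names (DialogStage, user_turn, the flight-booking slots) are hypothetical, not taken from the paper.

        # Illustrative sketch only: a dialog stage that accepts partial input
        # (user initiative) and prompts for whatever remains (system initiative).
        class DialogStage:
            def __init__(self, slots):
                self.pending = dict.fromkeys(slots)  # slot -> value, None = unfilled

            def user_turn(self, **partial_input):
                # The user may volunteer any subset of slots, in any order.
                for slot, value in partial_input.items():
                    if slot in self.pending:
                        self.pending[slot] = value

            def system_turn(self):
                # The system re-stages the dialog around the still-missing slots.
                missing = [s for s, v in self.pending.items() if v is None]
                return f"Please provide: {', '.join(missing)}" if missing else "Done."

        # Hypothetical flight-booking dialog; the user supplies the date early.
        stage = DialogStage(["origin", "destination", "date"])
        stage.user_turn(date="2024-07-01", origin="BOS")
        print(stage.system_turn())  # -> Please provide: destination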

    Keyboardless Visual Programming Using Voice, Handwriting, and Gesture

    Visual programming languages have facilitated the application development process, improving our ability to express programs, as well as our ability to view, edit, and interact with them. Yet even in these environments, productivity is restricted by the primary input devices: the mouse and the keyboard. As an alternative, we investigate a program development interface that responds to the most natural human communication modalities: voice, handwriting, and gesture. Speech- and pen-based systems have yet to find broad acceptance in everyday life because they are insufficiently advantageous to overcome problems with reliability. However, we believe that a visual programming environment with a multimodal user interface, properly constrained so as not to exceed the limits of current technology, has the potential to increase programming productivity not only for people who are manually or visually impaired, but for the general population as well. In this paper we report on such a system.
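    As a loose illustration of the constrained-vocabulary idea, the Python sketch below routes recognizer output from any modality through one small command table; the class, command names, and recognizers are assumptions, not the system described in the paper.

        # Hypothetical sketch: one constrained command set shared by all
        # modalities, which keeps recognition within reliable limits.
        class MultimodalEditor:
            def __init__(self):
                self.commands = {
                    "create node": lambda: print("node created"),
                    "connect": lambda: print("edge drawn"),
                    "delete": lambda: print("selection deleted"),
                }

            def on_event(self, modality, token):
                # token is the recognizer's best hypothesis for the command
                # spoken, written, or gestured in the given modality.
                action = self.commands.get(token)
                if action is None:
                    print(f"({modality}) unrecognized: {token!r}")
                else:
                    action()

        editor = MultimodalEditor()
        editor.on_event("voice", "create node")  # -> node created
        editor.on_event("gesture", "connect")    # -> edge drawn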

    A Multi-channel Application Framework for Customer Care Service Using Best-First Search Technique

    It has become imperative to find a solution to customers' dissatisfaction with the responses they receive when interacting with the customer care centres of mobile service providers. Problems with Human-to-Human (H2H) interaction between customer care centres and their customers include delayed response times, inconsistent answers to questions or enquiries, and, in some cases, a lack of dedicated access channels for interacting with the centres. This paper presents a framework and development techniques for a multi-channel application providing Human-to-System (H2S) interaction for the customer care centre of a mobile telecommunication provider. The proposed solution is called the Interactive Customer Service Agent (ICSA). Based on single-authoring, it will provide three media of interaction with the customer care centre of a mobile telecommunication operator: voice, phone, and web browsing. A mathematical search technique called Best-First Search is used to generate accurate results in a search environment.
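    The abstract names Best-First Search but gives no details. Below is a minimal generic best-first search in Python (greedy expansion by a heuristic score), with a toy topic graph standing in for the customer-care search space; the graph, heuristic, and function names are illustrative assumptions.

        # Minimal best-first search: always expand the frontier node with the
        # lowest heuristic score h(n). Graph and heuristic are toy assumptions.
        import heapq

        def best_first_search(start, goal, neighbors, h):
            frontier = [(h(start), start)]      # priority queue ordered by h
            came_from = {start: None}
            while frontier:
                _, node = heapq.heappop(frontier)
                if node == goal:                # reconstruct and return the path
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                for nxt in neighbors(node):
                    if nxt not in came_from:
                        came_from[nxt] = node
                        heapq.heappush(frontier, (h(nxt), nxt))
            return None                         # goal unreachable

        # Toy topic graph for a customer-care knowledge base.
        graph = {"billing": ["data plan", "roaming"], "data plan": ["top-up"],
                 "roaming": ["top-up"], "top-up": []}
        h = lambda n: 0 if n == "top-up" else len(n)  # stand-in relevance score
        print(best_first_search("billing", "top-up", graph.__getitem__, h))
        # -> ['billing', 'roaming', 'top-up']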

    Integrated Framework for Interaction and Annotation of Multimodal Data

    Ahmed, Afroza. MS. The University of Memphis. August 2010. Integrated Framework for Interaction and Annotation of Multimodal Data. Major Professor: Mohammed Yeasin, Ph.D. This thesis aims to develop an integrated framework and intuitive user interface to interact with, annotate, and analyze multimodal data (i.e., video, image, audio, and text). The proposed framework has three layers: (i) interaction, (ii) annotation, and (iii) analysis or modeling. These three layers are seamlessly wrapped together using a user-friendly interface designed according to proven principles from industry practice. The key objective is to facilitate interaction with multimodal data at various levels of granularity. In particular, the proposed framework allows interaction with the multimodal data at three levels: (i) the raw level, (ii) the feature level, and (iii) the semantic level. The main function of the proposed framework is to provide an efficient way to annotate raw multimodal data to create proper ground-truth metadata. The annotated data is used for visual analysis, co-analysis, and modeling of underlying concepts such as dialog acts, continuous gestures, and spontaneous emotions. The key challenge is to integrate, in one platform, code (computer programs) written in different programming languages and for different platforms, the display of results, and the multimodal data itself. This fully integrated tool achieved the stated goals and objectives and is a valuable addition to the short list of existing tools useful for the interaction, annotation, and analysis of multimodal data.
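    To make the ground-truth metadata notion concrete, here is a small Python sketch of the kind of annotation record such a framework might emit; the field names, level values, and JSON layout are assumptions, not the thesis's actual schema.

        # Hypothetical annotation record; fields and layout are assumed.
        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class Annotation:
            stream: str      # modality: "video", "image", "audio", or "text"
            level: str       # "raw", "feature", or "semantic"
            start: float     # segment start, in seconds
            end: float       # segment end, in seconds
            label: str       # e.g. a dialog act, gesture, or emotion tag
            annotator: str   # who produced this piece of ground truth

        # One semantic-level label over an audio segment:
        record = Annotation("audio", "semantic", 12.4, 15.1,
                            "dialog-act:agree", "coder-1")
        print(json.dumps(asdict(record), indent=2))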