
    Dynamically generated multi-modal application interfaces

    This work introduces a new UIMS (User Interface Management System) that aims to solve numerous problems in user-interface development arising from the hard-coded use of user interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language), and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK, one can build an application without explicitly creating a concrete end-user interface. At runtime, GITK creates these interfaces as needed from the abstract specification and runs them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, usable everywhere and at any time. The text further discusses the impact of such technology on users and on various aspects of software systems and their development. The main target audience of this work is software developers and people with a strong interest in software development.
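
    As a rough illustration of the idea (a minimal Python sketch with invented names, not the real GITK API, which is built around the XML-based GIML), an application can declare its inputs abstractly and leave the concrete presentation to a pluggable renderer:

        from dataclasses import dataclass
        from typing import Protocol

        @dataclass
        class Field:
            """One abstract input in the dialog: a name, a label, and a value kind."""
            name: str
            label: str
            kind: str  # e.g. "text", "number", "choice"

        class Renderer(Protocol):
            """Any concrete UI technology implements this interface."""
            def run(self, fields: list[Field]) -> dict[str, str]: ...

        class ConsoleRenderer:
            """One possible rendering module: a plain text console UI.
            A GUI or speech renderer could be plugged in instead."""
            def run(self, fields: list[Field]) -> dict[str, str]:
                return {f.name: input(f"{f.label}: ") for f in fields}

        def application(renderer: Renderer) -> None:
            """The application is written once, against the abstract spec;
            it never names a concrete widget set."""
            values = renderer.run([Field("user", "User name", "text"),
                                   Field("age", "Age", "number")])
            print(f"Hello {values['user']}, age {values['age']}")

        if __name__ == "__main__":
            application(ConsoleRenderer())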

    Bringing context-aware access to the web through spoken interaction

    The web has become the largest repository of multimedia information, and its convergence with telecommunications is now bringing the benefits of web technology to hand-held devices. To optimize data access using these devices and provide services that meet user needs through intelligent information retrieval, the system must sense and interpret the user environment and the communication context. In addition, natural spoken conversation with hand-held devices makes it possible to use these applications in environments in which GUI interfaces are not effective, provides more natural human-computer interaction, and facilitates web access for people with visual or motor disabilities, enabling their integration and removing barriers to Internet access. In this paper, we present an architecture for the design of context-aware systems that use speech to access web services. Our contribution focuses specifically on the use of context information to improve the effectiveness of providing web services by using a spoken dialog system for the user-system interaction. We also describe an application of our proposal to develop a context-aware railway information system, and provide a detailed evaluation of the influence of context information on the quality of the services supplied. Research funded by projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
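
    As a minimal sketch of how context can improve such a service (illustrative Python with invented names, not the paper's implementation), slots missing from a spoken request can be filled from the sensed environment instead of re-prompting the user:

        from dataclasses import dataclass

        @dataclass
        class Context:
            """Sensed user environment at the moment of the interaction."""
            location: str
            hour: int

        @dataclass
        class SpokenRequest:
            """Slots extracted from the user's utterance; some may be missing."""
            origin: str | None = None
            destination: str | None = None
            departure_hour: int | None = None

        def resolve(request: SpokenRequest, ctx: Context) -> dict:
            """Fill unfilled slots from context rather than asking again."""
            return {
                "origin": request.origin or ctx.location,    # assume "from here"
                "destination": request.destination,          # must be asked if None
                "hour": request.departure_hour or ctx.hour,  # assume "leaving now"
            }

        # "A train to Salamanca", spoken in Madrid at 9:00, needs no further
        # questions about origin or departure time:
        query = resolve(SpokenRequest(destination="Salamanca"), Context("Madrid", 9))
        print(query)  # {'origin': 'Madrid', 'destination': 'Salamanca', 'hour': 9}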

    Automatic Dialog Mask Generation for Device-Independent Web Applications

    When building web applications for use on different devices, developers need to deal with a wide range of input/output capabilities that affect how users interact with the application: a dialog that can be completed in one step on a desktop client may have to be broken up into a number of steps on a small-screen mobile device. Since it is time-consuming to define all the possible dialog masks and dialog flow variants for different channels manually, it would be desirable to automate the adaptation of dialog masks and flows. To address this need, we introduce the DiaDef language for the abstract, device-independent definition of the widgets in a dialog, and the DiaGen framework, which automatically breaks this abstract dialog definition down into sufficiently small dialog masks for the users’ mobile devices and incorporates them into suitable micro dialog flows that are generated at run-time to be handled by our Dialog Control Framework.
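
    The adaptation that DiaGen automates can be pictured with a small Python sketch (hypothetical names and capacities, not the actual DiaDef/DiaGen API), assuming each channel can display a limited number of widgets per mask:

        WIDGETS_PER_MASK = {"desktop": 12, "tablet": 6, "phone": 3}

        def split_into_masks(widgets: list[str], channel: str) -> list[list[str]]:
            """Break one abstract dialog definition into device-sized masks,
            which then become the steps of a generated micro dialog flow."""
            size = WIDGETS_PER_MASK[channel]
            return [widgets[i:i + size] for i in range(0, len(widgets), size)]

        form = ["name", "street", "city", "zip", "email", "phone"]
        print(split_into_masks(form, "desktop"))  # one mask: the whole dialog
        print(split_into_masks(form, "phone"))    # two masks of three widgets each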

    A novel approach for data fusion and dialog management in user-adapted multimodal dialog systems

    Proceedings of the 17th International Conference on Information Fusion (FUSION 2014), Salamanca, Spain, 7-10 July 2014. Multimodal dialog systems have demonstrated a high potential for more flexible, usable, and natural human-computer interaction. These improvements are highly dependent on the fusion and dialog management processes, which respectively integrate and interpret multimodal information and decide the next system response for the current dialog state. In this paper we propose to carry out the multimodal fusion and dialog management processes at the dialog level in a single step. To do this, we describe an approach based on a statistical model that takes the user's intention into account, generates a single representation obtained from the different input modalities and their confidence scores, and selects the next system action based on this representation. The paper also describes the practical application of the proposed approach to develop a multimodal dialog system providing travel and tourist information. This work was supported in part by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
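
    The single-step idea can be summarized with a simplified Python sketch (illustrative only; the paper's statistical model is estimated from dialog data, whereas a hand-written policy table stands in for it here): hypotheses from each modality are merged, weighted by their confidence scores, into one intention estimate, from which the next system action is selected directly:

        from collections import defaultdict

        def fuse(hypotheses: list[tuple[str, float]]) -> dict[str, float]:
            """Merge (intention, confidence) pairs from speech, touch, etc.
            into a normalized distribution over user intentions."""
            scores: dict[str, float] = defaultdict(float)
            for intention, confidence in hypotheses:
                scores[intention] += confidence
            total = sum(scores.values())
            return {k: v / total for k, v in scores.items()}

        # Hand-written stand-in for the learned next-action policy.
        POLICY = {"ask_hotel": "list_hotels", "ask_route": "show_route"}

        speech = [("ask_hotel", 0.6), ("ask_route", 0.4)]  # ASR/SLU n-best list
        touch = [("ask_hotel", 0.9)]                       # tap on a hotel icon
        belief = fuse(speech + touch)
        next_action = POLICY[max(belief, key=belief.get)]
        print(belief, "->", next_action)  # ask_hotel dominates -> list_hotels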

    Towards the Use of Dialog Systems to Facilitate Inclusive Education

    Continuous advances in the development of information technologies now make it possible to access learning contents from anywhere, at any time, and almost instantaneously. However, accessibility is not always a main objective in the design of educational applications, in particular to facilitate their adoption by people with disabilities. Different technologies have recently emerged to foster the accessibility of computers and new mobile devices, favoring more natural communication between the student and educational systems. This chapter describes innovative uses of multimodal dialog systems in education, with special emphasis on the advantages they provide for creating inclusive applications and learning activities.

    A FRAMEWORK FOR INTELLIGENT VOICE-ENABLED E-EDUCATION SYSTEMS

    Although the Internet has received significant attention in recent years, voice is still the most convenient and natural way of communicating, whether between humans or between a human and a computer. In voice applications, users may have different needs, which require the system to reason, make decisions, be flexible, and adapt to requests during interaction. These needs have placed new requirements on voice application development, such as the use of advanced models, techniques, and methodologies which take into account the needs of different users and environments. The ability of a system to behave close to human reasoning is often mentioned as one of the major requirements for the development of voice applications. In this paper, we present a framework for an intelligent voice-enabled e-Education application and an adaptation of the framework for the development of a prototype Course Registration and Examination (CourseRegExamOnline) module. This study is a preliminary report of an ongoing e-Education project containing the following modules: enrollment, course registration and examination, enquiries/information, messaging/collaboration, e-Learning, and library. The CourseRegExamOnline module was developed using VoiceXML for the voice user interface (VUI), PHP for the web user interface (WUI), Apache as the middleware, and a MySQL database as the back-end. The system would offer dual access modes using the VUI and WUI. The framework would serve as a reference model for developing voice-based e-Education applications. The e-Education system, when fully developed, would meet the needs of students who are normal users as well as those with certain forms of disabilities, such as visual impairment or repetitive strain injury (RSI), that make reading and writing difficult.
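
    The dual access modes can be sketched in a few lines of Python (invented names and data; the actual module uses VoiceXML, PHP, Apache, and MySQL rather than Python): both channels call the same back-end logic, so the VUI and WUI stay consistent by construction:

        COURSES = {"CSC101": "Intro to Computing", "CSC201": "Data Structures"}
        REGISTRATIONS: dict[str, list[str]] = {}

        def register(student_id: str, course_code: str) -> str:
            """Shared back-end logic, independent of the access channel."""
            if course_code not in COURSES:
                return f"Unknown course {course_code}."
            REGISTRATIONS.setdefault(student_id, []).append(course_code)
            return f"Registered for {COURSES[course_code]}."

        def vui_handler(spoken_student_id: str, course_code: str) -> str:
            """Voice channel: the result would be spoken by a TTS prompt."""
            return register(spoken_student_id, course_code)

        def wui_handler(form: dict[str, str]) -> str:
            """Web channel: the result would be rendered in an HTML page."""
            return register(form["student_id"], form["course"])

        print(vui_handler("20231234", "CSC101"))
        print(wui_handler({"student_id": "20231234", "course": "CSC201"}))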

    Building multi-domain conversational systems from single domain resources

    Current advances in the development of mobile and smart devices have generated a growing demand for natural human-machine interaction and favored the intelligent assistant metaphor, in which a single interface gives access to a wide range of functionalities and services. Conversational systems constitute an important enabling technology in this paradigm. However, they are usually designed to interact in semantically restricted domains in which users are offered a limited number of options and functionalities. The design of multi-domain systems implies that a single conversational system is able to assist the user in a variety of tasks. In this paper we propose an architecture for the development of multi-domain conversational systems that allows: (1) integrating available multi- and single-domain speech recognition and understanding modules, (2) combining the systems available in the different domains implied, so that it is not necessary to generate new expensive resources for the multi-domain system, and (3) achieving better domain recognition rates to select the appropriate interaction management strategies. We have evaluated our proposal combining three systems in different domains to show that the proposed architecture can satisfactorily deal with multi-domain dialogs. Work partially supported by projects MINECO TEC2012-37832-C02-01 and CICYT TEC2011-28626-C02-02.
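
    The routing step of such an architecture can be illustrated with a small Python sketch (the keyword scorer is a toy stand-in for the trained domain recognition models): each domain scores the utterance, the best-matching domain is selected, and that domain's existing single-domain system handles the turn:

        def keyword_scorer(keywords: set[str]):
            """Toy domain recognizer: fraction of domain keywords present."""
            def score(utterance: str) -> float:
                words = set(utterance.lower().split())
                return len(words & keywords) / len(keywords)
            return score

        DOMAINS = {
            "trains": (keyword_scorer({"train", "ticket", "timetable"}),
                       lambda u: "TRAINS system handles: " + u),
            "weather": (keyword_scorer({"weather", "rain", "forecast"}),
                        lambda u: "WEATHER system handles: " + u),
        }

        def route(utterance: str) -> str:
            """Pick the highest-scoring domain and delegate to the
            single-domain system already available for it."""
            best = max(DOMAINS, key=lambda d: DOMAINS[d][0](utterance))
            return DOMAINS[best][1](utterance)

        print(route("what is the weather forecast for tomorrow"))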