
    Multimodal agent interfaces and system architectures for health and fitness companions

    Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface for motivating and supporting users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and on system architectures for such interfaces.

    A Study of User's Performance and Satisfaction on the Web Based Photo Annotation with Speech Interaction

    This paper reports on an empirical evaluation of users' performance and satisfaction with a prototype web-based photo annotation system with speech interaction. Participants were Johor Bahru citizens from various backgrounds. They completed two parts of an annotation task: part A using PhotoASys, a photo annotation system with the proposed speech interaction style, and part B using the Microsoft Vista speech interaction style. Each part comprised eight tasks, including system login and selection of an album and photos. Users' performance was recorded with screen recording software, and data were captured on task completion time and subjective satisfaction; participants completed a satisfaction questionnaire after finishing the tasks. The performance data compare the proposed speech interaction style with the Microsoft Vista style as applied in the photo annotation system PhotoASys. On average, the proposed speech interaction style reduced annotation time by 64.72% relative to the Microsoft Vista style. Data analysis showed statistically significant differences in annotation performance and subjective satisfaction between the two interaction styles. These results could inform the design of related software for managing personal belongings. Comment: IEEE Publication Format, https://sites.google.com/site/journalofcomputing
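    As a clarifying note on the headline figure, the sketch below shows how such a relative reduction in task completion time is typically computed; the timings used are placeholders chosen only to reproduce the reported percentage, not the study's actual measurements.

```python
# How a relative reduction in completion time is computed; the timings are
# placeholders, not data from the study.

def percent_reduction(baseline_s: float, proposed_s: float) -> float:
    """Percentage by which the proposed interaction shortens the baseline time."""
    return (baseline_s - proposed_s) / baseline_s * 100.0

vista_mean = 100.0       # seconds, hypothetical mean over the eight tasks
photoasys_mean = 35.28   # seconds, hypothetical mean over the eight tasks

print(f"Reduction: {percent_reduction(vista_mean, photoasys_mean):.2f}%")  # 64.72%
```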

    Generic dialogue modeling for multi-application dialogue systems

    We present a novel approach to developing interfaces for multi-application dialogue systems. The targeted interfaces allow transparent switching between a large number of applications within one system. The approach, based on the Rapid Dialogue Prototyping Methodology (RDPM) and Vector Space model techniques from Information Retrieval, is composed of three main steps: (1) producing finalized dialogue models for the applications using the RDPM, (2) designing an application interaction hierarchy, and (3) navigating between the applications based on the user's application of interest.
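    A minimal sketch of step (3): the user's utterance is matched against each application's textual description in a vector space, and the dialogue switches to the best-scoring application. The application names, descriptions and the plain term-frequency weighting below are illustrative assumptions, not the paper's actual models.

```python
# Vector-space routing between applications: score each application's textual
# description against the utterance and switch to the best match.
# The applications and descriptions below are illustrative only.
import math
from collections import Counter

APPS = {
    "weather":  "weather forecast temperature rain snow city",
    "calendar": "meeting appointment schedule calendar date reminder",
    "music":    "play song artist album music volume",
}

def tf_vector(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(utterance: str) -> str:
    """Return the name of the application best matching the utterance."""
    u = tf_vector(utterance)
    return max(APPS, key=lambda name: cosine(u, tf_vector(APPS[name])))

print(route("will it rain in the city tomorrow"))  # -> weather
```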

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
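    As an illustration of the kind of bridging such a framework performs, the sketch below turns a driver's spoken report into a sensor-style observation record prior to publication. The field names loosely follow the OGC Observations & Measurements pattern, but the exact schema, vocabulary URIs and publishing API are assumptions for illustration, not the paper's encoding.

```python
# Sketch: wrap a human-generated (HMI) report as a sensor-style observation.
# Field names loosely follow the O&M pattern (procedure, observedProperty,
# featureOfInterest, result); the URIs and structure are illustrative only.
import json
from datetime import datetime, timezone

def observation_from_driver_report(vehicle_id: str, phrase: str,
                                   lat: float, lon: float) -> dict:
    return {
        "procedure": f"urn:example:vehicle:{vehicle_id}:driver-hmi",
        "observedProperty": "urn:example:property:traffic-condition",
        "featureOfInterest": {"type": "Point", "coordinates": [lon, lat]},
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "result": phrase,   # free-text driver report, to be semantically annotated
    }

obs = observation_from_driver_report("car-42", "heavy traffic ahead", 40.4168, -3.7038)
print(json.dumps(obs, indent=2))
```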

    A Multi-channel Application Framework for Customer Care Service Using Best-First Search Technique

    It has become imperative to find a solution to customers' dissatisfaction with the responses of mobile service providers when interacting with their customer care centres. Problems with Human-to-Human (H2H) interaction between customer care centres and their customers include delayed response times, inconsistent solutions to questions or enquiries and, in some cases, a lack of dedicated access channels for interacting with customer care centres. This paper presents a framework and development techniques for a multi-channel application providing Human-to-System (H2S) interaction for the customer care centre of a mobile telecommunication provider. The proposed solution is called the Interactive Customer Service Agent (ICSA). Based on single-authoring, it provides three media of interaction with the customer care centre of a mobile telecommunication operator: voice, phone and web browsing. A mathematical search technique called Best-First Search is used to generate accurate results in the search environment.
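    A minimal sketch of best-first search applied to a small FAQ-style structure, always expanding the most promising node according to a keyword-overlap heuristic. The tree contents and the heuristic are illustrative, not the ICSA implementation.

```python
# Best-first search over a small FAQ tree: always expand the node with the
# highest heuristic score (keyword overlap with the customer's query).
# The tree contents and the heuristic are illustrative only.
import heapq

FAQ_TREE = {
    "root":    ["billing", "network"],
    "billing": ["data bundle prices", "check account balance"],
    "network": ["no signal troubleshooting", "roaming activation"],
}

def score(query: str, node: str) -> int:
    return len(set(query.lower().split()) & set(node.lower().split()))

def best_first(query: str, start: str = "root") -> str:
    frontier = [(-score(query, start), start)]   # max-heap via negated scores
    best_score, best_node = score(query, start), start
    while frontier:
        neg, node = heapq.heappop(frontier)
        if -neg > best_score:
            best_score, best_node = -neg, node
        for child in FAQ_TREE.get(node, []):
            heapq.heappush(frontier, (-score(query, child), child))
    return best_node

print(best_first("how do I check my account balance"))  # -> check account balance
```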

    A toolkit of mechanism and context independent widgets

    Most human-computer interfaces are designed to run on a static platform (e.g. a workstation with a monitor) in a static environment (e.g. an office). However, with mobile devices becoming ubiquitous and capable of running applications similar to those found on static devices, it is no longer valid to design interfaces only for static platforms. This paper describes a user-interface architecture which allows interactors to be flexible about the way they are presented. This flexibility is defined by the different input and output mechanisms used. An interactor may use different mechanisms depending upon their suitability in the current context, user preference and the resources available for presentation using that mechanism.
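    A minimal sketch of the underlying idea: at presentation time an interactor picks, from the mechanisms available, one suited to the current context, the user's preference and the presentation resources. The mechanism names, contexts and costs are illustrative assumptions, not the toolkit's actual API.

```python
# An interactor choosing an output mechanism based on context, preference and
# a simple resource cost. Mechanisms, contexts and costs are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mechanism:
    name: str
    suited_contexts: frozenset   # contexts in which this mechanism works well
    resource_cost: int           # lower means cheaper to present

MECHANISMS = [
    Mechanism("screen_text",   frozenset({"office"}), 1),
    Mechanism("speech_output", frozenset({"walking", "driving"}), 2),
    Mechanism("vibration",     frozenset({"walking", "meeting"}), 1),
]

def choose_mechanism(context: str, preferred: Optional[str] = None) -> Mechanism:
    candidates = [m for m in MECHANISMS if context in m.suited_contexts]
    for m in candidates:
        if m.name == preferred:
            return m
    # Otherwise fall back to the cheapest mechanism suited to this context.
    return min(candidates, key=lambda m: m.resource_cost)

print(choose_mechanism("driving").name)                              # -> speech_output
print(choose_mechanism("walking", preferred="speech_output").name)   # -> speech_output
```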

    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions for the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented with both other modalities (multimodality) and information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process-control mechanism that is being considered by the W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives by presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented on an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
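    The event-based idea can be illustrated without actual SCXML markup: context events from vehicle sensors and speech events are dispatched to a single state-chart-like controller that adapts the spoken interaction to the driving situation. A minimal Python sketch follows; the states, event names and speed threshold are illustrative, not the MARTA applications.

```python
# Event-driven sketch of combining context-awareness and speech interaction in
# one state-chart-like controller. States, events and thresholds are illustrative.

class InVehicleHMI:
    def __init__(self) -> None:
        self.state = "idle"
        self.high_workload = False

    def on_event(self, name: str, data: dict | None = None) -> None:
        data = data or {}
        if name == "context.speed":
            # Context-awareness: require confirmation when driver workload is high.
            self.high_workload = data.get("kmh", 0) > 120
        elif name == "speech.recognized" and self.state == "idle":
            self.state = "confirming" if self.high_workload else "executing"
        elif name == "speech.confirmed" and self.state == "confirming":
            self.state = "executing"
        print(f"{name:<20} -> state={self.state}, high_workload={self.high_workload}")

hmi = InVehicleHMI()
hmi.on_event("context.speed", {"kmh": 130})
hmi.on_event("speech.recognized", {"utterance": "call home"})
hmi.on_event("speech.confirmed")
```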

    User interfaces for anyone anywhere

    In a global context of multimodal man-machine interaction, we approach a wide spectrum of fields, such as software engineering, intelligent communication and speech dialogues. This paper presents technological aspects of the shift from traditional desktop interfaces to more expressive, natural, flexible and portable ones, through which more people, in a greater number of situations, will be able to interact with computers. Speech appears to be one of the best forms of interaction, especially for supporting non-skilled users. Modalities such as speech, among others, tend to be very relevant to accessing information in our future society, in which mobile devices will play a preponderant role. Therefore, we place an emphasis on verbal communication in open environments (Java/XML) using software agent technology. Fundação para a Ciência e a Tecnologia – PRAXIS XXI/BD/20095/99; Germany, Ministry of Science and Education – EMBASSI – 01IL90