
    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safer in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, spoken interaction can still be an important source of driver distraction. In this challenging scenario, and while speech technologies continue to evolve, further research is needed to explore how they can be complemented both with other modalities (multimodality) and with information from the growing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by such policies, which simply try to make the best use of all available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework that combines context modelling and multimodal interaction using the W3C XML language SCXML. SCXML provides a general process-control mechanism that the W3C is considering for improving both voice interaction (VoiceXML) and multimodal interaction (MMI). Our approach tries to anticipate and extend these initiatives by presenting a flexible SCXML-based method for designing a wide range of multimodal, context-aware in-vehicle HMIs. The proposed framework for HMI design and specification has been implemented on an automotive OSGi service platform and is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
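
    The Python sketch below is only a loose illustration of the event-driven, state-machine style of control that SCXML provides; it is not the authors' framework, which is specified in SCXML markup and runs on an OSGi platform. All state names, events, sensor fields, and thresholds here are hypothetical.

```python
# Minimal sketch (not the authors' framework): an event-driven state machine in the
# spirit of SCXML, where assumed context data (cabin noise, vehicle speed) steers
# how the in-vehicle HMI responds to speech-recognition events.

class DialogueStateMachine:
    """Toy SCXML-like controller: states, transitions, and context-aware actions."""

    def __init__(self):
        self.state = "idle"
        self.context = {"noise_db": 45, "speed_kmh": 0}   # assumed context sensors
        self.transitions = {
            ("idle", "user_speaks"): "listening",
            ("listening", "intent_recognized"): "confirming",
            ("listening", "recognition_failed"): "fallback",
            ("confirming", "confirmed"): "idle",
            ("fallback", "user_speaks"): "listening",
        }

    def update_context(self, **sensor_values):
        """Context events from vehicle sensors update the data model."""
        self.context.update(sensor_values)

    def send(self, event):
        """Deliver an event; fire the matching transition if one exists."""
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            return self.state
        self.state = nxt
        self._on_enter(nxt)
        return self.state

    def _on_enter(self, state):
        # Context-aware policy: prefer visual output when the cabin is noisy.
        if state == "fallback":
            if self.context["noise_db"] > 70:
                print("Showing on-screen options (too noisy for a voice re-prompt).")
            else:
                print("Re-prompting by voice: 'Sorry, could you repeat that?'")


if __name__ == "__main__":
    hmi = DialogueStateMachine()
    hmi.update_context(noise_db=75, speed_kmh=110)   # hypothetical highway conditions
    hmi.send("user_speaks")
    hmi.send("recognition_failed")   # noisy cabin -> visual fallback instead of voice
```

    In the actual framework, such states, transitions, and context conditions would be declared in SCXML documents and executed by an SCXML interpreter rather than hand-coded as above.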

    A Dual Modal Presentation of Network Relationships in Texts

    Based on Baddeley’s working memory model, this research proposed a method to convert textual information containing network relationships into a “graphics + voice” representation and hypothesized that this dual-modal presentation would result in better comprehension performance and higher satisfaction than a purely textual display. A simple t-test experiment was used to test the hypothesis. The independent variable was the presentation mode: textual display vs. visual-auditory presentation. The dependent variables were user performance and satisfaction. Thirty subjects participated in the experiment. The results indicate that both user performance and satisfaction improved significantly with the “graphics + voice” presentation.
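
    As a rough illustration of the reported analysis, the sketch below runs an independent-samples t-test in Python. The scores, the equal 15/15 split of the thirty subjects, and the variable names are placeholders, not the study's data.

```python
# Minimal sketch of the reported analysis: an independent-samples t-test comparing
# comprehension scores under the two presentation modes. The numbers are made-up
# placeholders (15 subjects per group assumed), not the study's measurements.
from scipy import stats

text_only      = [62, 58, 71, 65, 60, 55, 68, 63, 59, 66, 61, 57, 64, 62, 60]
graphics_voice = [74, 70, 78, 69, 75, 72, 80, 71, 76, 73, 77, 70, 74, 72, 75]

t_stat, p_value = stats.ttest_ind(graphics_voice, text_only)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the dual-modal group scored significantly higher on average.")
```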

    Queueing Network Modeling of Human Performance and Mental Workload in Perceptual-Motor Tasks.

    Integrated with mathematical modeling approaches, this thesis uses the Queueing Network-Model Human Processor (QN-MHP) as a simulation platform to quantify human performance and mental workload in four representative perceptual-motor tasks of both theoretical and practical importance: discrete perceptual-motor tasks (transcription typing and the psychological refractory period) and continuous perceptual-motor tasks (visual-manual tracking and vehicle steering with secondary tasks). The properties of queueing networks (queuing/waiting while processing information, serial and parallel information-processing capability, overall mathematical structure, and entity-based network arrangement) allow QN-MHP to quantify several important aspects of perceptual-motor tasks and unify them within one cognitive architecture. In modeling the discrete perceptual-motor task in a single-task situation (transcription typing), QN-MHP quantifies and unifies 32 transcription-typing phenomena covering many aspects of human performance: interkey time, typing units and spans, typing errors, concurrent task performance, eye movements, and skill effects, providing an alternative way to model this basic and common activity in human-machine interaction. In quantifying the discrete perceptual-motor task in a dual-task situation (the psychological refractory period, PRP), the queueing network model accounts for various experimental findings in PRP, including all of the major counterexamples to existing models, with an equal or smaller number of free parameters and no need for task-specific lock/unlock assumptions, demonstrating its unique advantages in modeling discrete dual-task performance. In modeling human performance and mental workload in the continuous perceptual-motor tasks (visual-manual tracking and vehicle steering), QN-MHP is used as a simulation platform and a set of equations is developed to establish quantitative relationships between queueing-network measures (e.g., subnetwork utilization and arrival rate) and both P300 amplitude measured with ERP techniques and subjective mental workload measured with NASA-TLX, predicting and visualizing mental workload in real time. Moreover, this thesis applies QN-MHP to the design of an adaptive workload management system in vehicles and integrates QN-MHP with scheduling methods to devise multimodal in-vehicle systems. Further development of the cognitive architecture in theory and practice is also discussed.
    Ph.D. thesis, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/55678/2/changxuw_1.pd
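
    As a loose illustration of how a queueing-network quantity such as utilization could feed a real-time workload index, the sketch below uses the textbook single-server utilization rho = lambda/mu and an assumed linear mapping onto a 0-100 scale. The thesis's own equations, which relate these quantities to P300 amplitude and NASA-TLX scores, are not reproduced here; the rates and the mapping are illustrative assumptions.

```python
# Illustrative sketch only: compute a (sub)network server's utilization from its
# arrival and service rates, then map it onto a hypothetical 0-100 workload index.

def utilization(arrival_rate, service_rate):
    """Fraction of time a single-server (sub)network is busy, rho = lambda / mu."""
    if arrival_rate >= service_rate:
        return 1.0                      # saturated server
    return arrival_rate / service_rate

def workload_index(rho, scale=100.0):
    """Hypothetical linear mapping from utilization to a NASA-TLX-like 0-100 scale."""
    return scale * rho

# Example: a secondary task raises the arrival rate of items to a cognitive subnetwork.
for lam in (1.0, 2.0, 3.5):             # items per second (assumed)
    rho = utilization(arrival_rate=lam, service_rate=4.0)
    print(f"lambda={lam}: utilization={rho:.2f}, workload index={workload_index(rho):.0f}")
```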