34,400 research outputs found

    Adaptive multi-modal interface model concerning mental workload in take-over request during semi-autonomous driving

    With the development of automated driving technologies, the human factors involved in automated driving are gaining increasing attention, so that the convenience the technology brings can be balanced against the safety risks in commercial vehicle models. One influential human factor is mental workload. In the take-over request (TOR) from autonomous to manual driving at Level 3 of SAE International's Levels of Driving Automation, the time window in which the driver must gain full comprehension of the driving environment is extremely short, which places the driver under high mental workload. To support the driver during a TOR, we propose an adaptive multi-modal interface model concerning mental workload. In this study, we evaluated the reliability of part of the proposed model in a driving-simulator experiment as well as with experimental data from a previous study.
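    The abstract does not spell out how the workload estimate drives the interface, so the following is a minimal sketch of one way an adaptive modality choice for a TOR alert could look. The function name, thresholds, and modality sets are illustrative assumptions, not taken from the paper.

# Illustrative sketch, not from the paper: choosing take-over-request
# alert modalities from an estimated mental-workload score in [0, 1].
# The thresholds and modality sets below are hypothetical assumptions.

def select_tor_modalities(workload: float) -> list[str]:
    """Return the alert modalities to use for a take-over request.

    The idea: the higher the estimated workload, the more redundant
    channels are used, so the driver is reached even if one modality
    is already saturated by the driving situation.
    """
    if workload < 0.3:   # low load: a visual cue may suffice
        return ["visual"]
    if workload < 0.7:   # moderate load: add an auditory chime
        return ["visual", "auditory"]
    # high load: use all channels, including haptic seat/wheel vibration
    return ["visual", "auditory", "haptic"]

if __name__ == "__main__":
    for w in (0.2, 0.5, 0.9):
        print(f"workload={w:.1f} -> {select_tor_modalities(w)}")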

    A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

    In motor learning, real-time multi-modal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multi-modal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in place of a live trainer, providing real-time assessment, feedback, and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of their limitations. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant", consisting of both hardware and software for guided at-home motor learning, is designed and adapted for a specific individual and trainer. In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a three-dimensional assessment framework is proposed which comprises two spatial (posture, progression) and one temporal (pacing) domains of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three domains (audio, visual, haptic). Feedback domains are mapped to modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by an adaptive threshold. Approaches for multi-modal integration and feedback fading are discussed. Finally, a novel approach for stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work on the integration of stealth adaptation techniques across the field of exergames.
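    The adaptive-threshold feedback rule is only named in the abstract, so the sketch below illustrates the stated idea under assumed values: an assumed domain-to-modality mapping and a per-domain threshold that tracks the user's recent deviations. It is not the dissertation's implementation.

# Hypothetical sketch of the rule the abstract describes: each assessment
# domain (posture, progression, pacing) is mapped to a feedback modality,
# and feedback fires only when the deviation from the ideal performance
# exceeds a per-domain threshold that adapts to the user's recent errors.
# The mapping, thresholds, and adaptation rate are illustrative, not the
# dissertation's actual values.

DOMAIN_TO_MODALITY = {"posture": "haptic", "progression": "visual", "pacing": "audio"}

class AdaptiveFeedback:
    def __init__(self, base_threshold: float = 0.2, rate: float = 0.1):
        self.thresholds = {d: base_threshold for d in DOMAIN_TO_MODALITY}
        self.rate = rate  # how quickly each threshold tracks recent deviations

    def update(self, domain: str, deviation: float):
        """Return the modality to trigger, or None if within tolerance."""
        fired = deviation > self.thresholds[domain]
        # Exponential moving average: the threshold drifts toward the user's
        # recent deviation level, so only unusually large errors keep firing.
        self.thresholds[domain] += self.rate * (deviation - self.thresholds[domain])
        return DOMAIN_TO_MODALITY[domain] if fired else None

fb = AdaptiveFeedback()
print(fb.update("posture", 0.5))   # large deviation -> 'haptic'
print(fb.update("pacing", 0.05))   # small deviation -> None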

    Specification Techniques for Multi-Modal Dialogues in the U-Wish Project

    In this paper we describe the development of a specification technique for specifying interactive web-based services. We wanted to design a language that can serve as a means of communication between designers and developers of interactive services, that makes it easier to develop web-based services fitted to the users, and that shortens the pathway from design to implementation. The language, still under development, is based on process algebra and can be connected to the results of task analysis. We have been working on the automatic generation of executable prototypes from the specifications. In this way the specification language can establish a connection between users, design, and implementation. A first version of this language is available, as well as prototype tools for executing the specifications. Ideas are given as to how to make the connection between specifications and task analysis.
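    The abstract names process algebra as the basis of the language without showing it, so the toy sketch below is a hypothetical illustration of the idea of a dialogue specification plus a generated executable prototype. The class names, operators, and example dialogue are assumptions, not the actual U-Wish notation.

# Illustrative sketch only: the classes and the example dialogue below are
# assumptions. They mimic two process-algebra operators (sequential
# composition and choice) and a trivial interpreter that plays the role of
# the "executable prototype" generated from a specification.

class Action:
    """An atomic dialogue step, e.g. a page or form the user interacts with."""
    def __init__(self, name: str):
        self.name = name
    def run(self):
        print(f"step: {self.name}")

class Seq:
    """Sequential composition: run each sub-process in order."""
    def __init__(self, *procs):
        self.procs = procs
    def run(self):
        for p in self.procs:
            p.run()

class Choice:
    """Choice: the user picks which branch of the dialogue to follow."""
    def __init__(self, **branches):
        self.branches = branches
    def run(self):
        label = ""
        while label not in self.branches:
            label = input(f"choose one of {sorted(self.branches)}: ")
        self.branches[label].run()

# A tiny web-service dialogue: log in, then either search or browse.
spec = Seq(Action("login"),
           Choice(search=Action("enter query"),
                  browse=Action("pick category")),
           Action("show results"))

if __name__ == "__main__":
    spec.run()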

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment, and will thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.

    Footprints of emergence

    It is ironic that the management of education has become more closed while learning has become more open, particularly over the past 10-20 years. The curriculum has become more instrumental, predictive, standardized, and micro-managed in the belief that this supports employability as well as the management of educational processes, resources, and value. Meanwhile, people have embraced interactive, participatory, collaborative, and innovative networks for living and learning. To respond to these challenges, we need to develop practical tools to help us describe these new forms of learning, which are multivariate, self-organised, complex, adaptive, and unpredictable. We draw on complexity theory and our experience as researchers, designers, and participants in open and interactive learning to go beyond conventional approaches. We develop a 3D model of landscapes of learning for exploring the relationship between prescribed and emergent learning in any given curriculum. We do this by repeatedly testing our descriptive landscapes (or footprints) against theory, research, and practice across a range of case studies. By doing this, we have not only come up with a practical tool that can be used by curriculum designers, but also realised that the curriculum itself can usefully be treated as emergent, depending on the dynamics between prescribed and emergent learning and how the learning landscape is curated.

    Reference Resolution in Multi-modal Interaction: Position paper

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need for more research on reference resolution in multimodal contexts. In multi-modal interaction, the human conversational partner can use more than one modality to convey his or her message to an environment in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.
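    As a hypothetical illustration of the problem being raised (not of the authors' treatment of it), the sketch below resolves a referring expression by combining a spoken description with a pointing gesture; the object data and the distance cut-off are made up.

# Hypothetical illustration of multimodal reference resolution: intersect
# the objects matching what was said ("the red one") with the objects near
# where the user pointed. All data below are made-up assumptions.

from math import dist

# Candidate objects in the virtual environment: name, colour, 2D position.
objects = [
    {"name": "block_1", "colour": "red",  "pos": (0.9, 1.1)},
    {"name": "block_2", "colour": "red",  "pos": (4.0, 4.2)},
    {"name": "ball_1",  "colour": "blue", "pos": (1.0, 1.0)},
]

def resolve(colour: str, pointed_at: tuple, max_distance: float = 1.0):
    """Return objects matching the spoken colour AND near the pointing target."""
    return [o["name"] for o in objects
            if o["colour"] == colour
            and dist(o["pos"], pointed_at) <= max_distance]

# "that red one" plus a pointing gesture near (1, 1): only block_1 survives,
# even though the speech alone is ambiguous between block_1 and block_2.
print(resolve("red", (1.0, 1.0)))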