
    Modelling and evaluating drivers’ interactions with in-vehicle information systems (IVIS)

    Evaluating the usability of In-Vehicle Information Systems (IVIS) guides engineers in understanding the interaction design limitations of current systems and in assessing the potential of concept technologies. The complexity and diversity of the driving task present a unique challenge in defining usability: user-IVIS interactions create a dual-task scenario in which conflicts can arise between the primary driving task and secondary IVIS tasks. This, together with the safety-critical nature of driving, must be accounted for in defining and evaluating IVIS usability.

    Work in the initial phases of this project defined usability for IVIS and developed a framework for evaluation. A key finding of this work was the importance of context-of-use in defining usability, so that specific usability criteria and appropriate evaluation methods can be identified. The evaluation methods in the framework were categorised as either analytic, i.e. applicable at the earliest stages of product development to predict performance and usability, or empirical, i.e. measuring user performance under simulated or real-world conditions. Two case studies have shown that the evaluation framework is sensitive to differences between IVIS and can identify important usability issues, which can be used to inform design improvements.

    The later stages of the project have focussed on Multimodal Critical Path Analysis (CPA). Initially, CPA was used to predict IVIS task interaction times for a stationary vehicle. The CPA model was extended to produce fastperson and slowperson task time estimates, as well as average predictions. For the CPA to be of real use to designers of IVIS, it also needed to predict dual-task IVIS interaction times, i.e. the time taken to perform IVIS tasks whilst driving. A hypothesis of shared glances was developed, proposing that drivers are able to monitor two visual information sources simultaneously. The CPA technique was extended to predict dual-task interaction times by modelling this shared glance pattern. The hypothesis has important implications for theories of visual behaviour and for the design of future IVIS.
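    The critical-path idea behind such predictions can be sketched as follows: subtasks on different modalities (visual, manual) run in parallel, dependencies serialise them, and the predicted interaction time is the longest path through the task network. The subtask names, durations, and dependencies below are invented for illustration and are not taken from the project.

    ```python
    # Minimal sketch of critical path analysis over a task network.
    # Subtasks, durations (seconds), and dependencies are hypothetical.

    def critical_path_time(durations, deps):
        """Predicted total time = longest path through the dependency DAG."""
        finish = {}

        def earliest_finish(task):
            # A task starts once all its predecessors have finished.
            if task not in finish:
                start = max((earliest_finish(d) for d in deps.get(task, [])),
                            default=0.0)
                finish[task] = start + durations[task]
            return finish[task]

        return max(earliest_finish(t) for t in durations)

    # Hypothetical IVIS subtasks: the manual "move_hand" overlaps with the
    # visual "glance_screen"/"read_menu" chain, so it does not add to the
    # total -- this is where multimodal parallelism shortens the prediction.
    durations = {"glance_screen": 0.5, "read_menu": 0.8,
                 "move_hand": 0.6, "press_key": 0.3}
    deps = {"read_menu": ["glance_screen"],
            "press_key": ["read_menu", "move_hand"]}

    print(critical_path_time(durations, deps))  # longest path: glance -> read -> press
    ```

    Because the manual subtask finishes before the visual chain, the critical path here is 0.5 + 0.8 + 0.3 = 1.6 s rather than the 2.2 s sum of all subtasks.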

    Digital interaction: where are we going?

    In the framework of the AVI 2018 Conference, the interuniversity center ECONA organized a thematic workshop on "Digital Interaction: where are we going?". Six contributions from ECONA members investigate different perspectives on this theme.

    An Introduction to 3D User Interface Design

    3D user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of three-dimensional (3D) interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3D tasks and the use of traditional two-dimensional interaction styles in 3D environments. We divide most user interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques, but also practical guidelines for 3D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3D interaction design, and some example applications with complex 3D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, and they can adapt to their environment and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically-based usability experiments is stressed.

    GazeTouchPIN: Protecting Sensitive Data on Mobile Devices Using Secure Multimodal Authentication

    Although mobile devices provide access to a plethora of sensitive data, most users still only protect them with PINs or patterns, which are vulnerable to side-channel attacks (e.g., shoulder surfing). However, prior research has shown that privacy-aware users are willing to take further steps to protect their private data. We propose GazeTouchPIN, a novel secure authentication scheme for mobile devices that combines gaze and touch input. Our multimodal approach complicates shoulder-surfing attacks by requiring attackers to observe both the screen and the user's eyes to find the password. We evaluate the security and usability of GazeTouchPIN in two user studies (N=30). We found that while GazeTouchPIN requires longer entry times, privacy-aware users would use it on demand when feeling observed or when accessing sensitive data. The results show that the rate of successful shoulder-surfing attacks drops from 68% to 10.4% when using GazeTouchPIN.