
    Instrument for the assessment of road user automated vehicle acceptance: A pyramid of user needs of automated vehicles

    This study proposed a new methodological approach for the assessment of automated vehicle acceptance (AVA) from the perspective of road users inside and outside of AVs, both pre- and post-AV experience. Users can be drivers and passengers, but also external road users, such as pedestrians, (motor-)cyclists, and other car drivers, interacting with AVs. A pyramid was developed that provides a hierarchical representation of user needs: fundamental user needs are organized at the bottom of the pyramid, while higher-level user needs sit at the top. The pyramid distinguishes six levels of needs: safety, trust, efficiency, comfort and pleasure, social influence, and well-being. Some needs exist universally across users, while others are user-specific. These needs are translated into operationalizable indicators that form the items of a questionnaire for assessing the AVA of users inside and outside AVs. The formulation of the questionnaire items was derived from established technology acceptance models. Because the instrument is based on the same model for all road users, AVA can now be compared across different road user groups. We recommend that future research validate this questionnaire by administering it in studies, contributing to the development of a short, efficient, and standardized metric for the assessment of AVA. Comment: 17 pages, 1 figure
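
    As a compact reference, the sketch below simply encodes the six pyramid levels named in the abstract as an ordered Python list, from the most fundamental need at the bottom to the highest-level need at the top; the concrete questionnaire items derived from these levels are defined in the paper itself, not here.

        # Six levels of the proposed pyramid of user needs, ordered from
        # bottom (fundamental) to top (higher-level), as named in the abstract.
        PYRAMID_OF_USER_NEEDS = [
            "safety",
            "trust",
            "efficiency",
            "comfort and pleasure",
            "social influence",
            "well-being",
        ]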

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, we identify and clarify some general conclusions to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    The Challenges in Modeling Human Performance in 3D Space with Fitts’ Law

    With the rapid growth of virtual reality technologies, object interaction is becoming increasingly immersive, shedding light on human perception and opening promising directions for evaluating human performance under different settings. This technological growth has greatly increased the need for a human performance metric in 3D space. Fitts' law is perhaps the most widely used model in HCI history for predicting human movement, albeit in lower dimensions. Despite the collective effort towards deriving an advanced extension of a 3D human performance model based on Fitts' law, a standardized metric is still missing. Moreover, most extensions to date assume or limit their findings to certain settings, effectively disregarding variables that are fundamental to 3D object interaction. In this review, we investigate and analyze the most prominent extensions of Fitts' law and compare their characteristics, pinpointing potentially important aspects for deriving a higher-dimensional performance model. Lastly, we discuss the complexities, frontiers, and potential challenges that may lie ahead. Comment: Accepted at the ACM CHI 2021 Conference on Human Factors in Computing Systems (CHI '21 Extended Abstracts)
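
    As a concrete reference point, the sketch below computes predicted movement time under the one-dimensional Shannon formulation of Fitts' law, MT = a + b * log2(D/W + 1). The coefficients a and b here are arbitrary placeholders that would normally be fitted to empirical pointing data, and how to generalize D and W to 3D targets is precisely the open question the review examines.

        import math

        def fitts_movement_time(distance, width, a=0.1, b=0.2):
            """Predicted movement time (s) under the Shannon formulation of
            Fitts' law: MT = a + b * log2(D/W + 1).

            distance: distance to the target centre (same unit as width)
            width:    target width along the movement axis
            a, b:     regression coefficients; placeholder values that would
                      normally be fitted to empirical pointing data
            """
            index_of_difficulty = math.log2(distance / width + 1)  # bits
            return a + b * index_of_difficulty

        # A far, small target has a higher index of difficulty and hence a
        # longer predicted movement time than a near, large one.
        print(fitts_movement_time(distance=512, width=16))  # ID ~ 5.04 bits
        print(fitts_movement_time(distance=128, width=64))  # ID ~ 1.58 bits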

    Unsupervised Behaviour Analysis and Magnification (uBAM) using Deep Learning

    Motor behaviour analysis is essential to biomedical research and clinical diagnostics, as it provides a non-invasive strategy for identifying motor impairment and changes in it caused by interventions. State-of-the-art instrumented movement analysis is time- and cost-intensive, since it requires placing physical or virtual markers. Besides the effort required to mark keypoints or create the annotations needed for training or fine-tuning a detector, users need to know the behaviour of interest beforehand to provide meaningful keypoints. We introduce unsupervised behaviour analysis and magnification (uBAM), an automatic deep learning algorithm for analysing behaviour by discovering and magnifying deviations. A central aspect is unsupervised learning of posture and behaviour representations to enable an objective comparison of movement. Besides discovering and quantifying deviations in behaviour, we also propose a generative model for visually magnifying subtle behaviour differences directly in a video, without requiring a detour via keypoints or annotations. Essential for this magnification of deviations, even across different individuals, is a disentangling of appearance and behaviour. Evaluations on rodents and human patients with neurological diseases demonstrate the wide applicability of our approach. Moreover, combining optogenetic stimulation with our unsupervised behaviour analysis shows its suitability as a non-invasive diagnostic tool correlating function to brain plasticity. Comment: Published in Nature Machine Intelligence (2021), https://rdcu.be/ch6p
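
    The sketch below is a minimal conceptual illustration of the disentangling idea, not the published uBAM architecture: one encoder captures appearance, another captures posture/behaviour, and magnification extrapolates a frame's behaviour code away from a reference code before decoding, so appearance stays fixed while the behavioural deviation is exaggerated. All layer sizes and the random stand-in data are assumptions made for the example.

        import torch
        import torch.nn as nn

        class DisentangledAutoencoder(nn.Module):
            """Toy two-branch autoencoder: separate codes for appearance and
            behaviour, decoded jointly back into a (flattened) frame."""

            def __init__(self, frame_dim=64 * 64, code_dim=32):
                super().__init__()
                self.appearance_enc = nn.Sequential(
                    nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
                self.behaviour_enc = nn.Sequential(
                    nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
                self.decoder = nn.Sequential(
                    nn.Linear(2 * code_dim, 256), nn.ReLU(), nn.Linear(256, frame_dim))

            def forward(self, frame):
                appearance = self.appearance_enc(frame)
                behaviour = self.behaviour_enc(frame)
                recon = self.decoder(torch.cat([appearance, behaviour], dim=-1))
                return recon, appearance, behaviour

        def magnify(model, frame, reference_behaviour, factor=2.0):
            """Exaggerate the deviation of a frame's behaviour code from a
            reference (e.g. healthy) code while keeping its appearance code."""
            _, appearance, behaviour = model(frame)
            amplified = reference_behaviour + factor * (behaviour - reference_behaviour)
            return model.decoder(torch.cat([appearance, amplified], dim=-1))

        # Random stand-in data; real training would fit the autoencoder on
        # video frames without any keypoint labels or annotations.
        model = DisentangledAutoencoder()
        frame = torch.rand(1, 64 * 64)
        healthy_reference = torch.zeros(1, 32)
        print(magnify(model, frame, healthy_reference).shape)  # torch.Size([1, 4096])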

    High accuracy context recovery using clustering mechanisms

    This paper examines the recovery of user context in indoor environments with existing wireless infrastructures to enable assistive systems. We present a novel approach to the extraction of user context, casting context recovery as an unsupervised clustering problem. A well-known density-based clustering technique, DBSCAN, is adapted to recover user context, including the user's motion state and the significant places the user visits, from WiFi observations consisting of access point IDs and signal strengths. Furthermore, user rhythms, i.e., sequences of places the user visits periodically, are derived from these low-level contexts by employing a state-of-the-art probabilistic clustering technique, Latent Dirichlet Allocation (LDA), to enable a variety of application services. Experimental results with real data are presented to validate the proposed unsupervised learning approach and demonstrate its applicability.
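
    To illustrate the clustering step only (not the paper's full pipeline), the sketch below runs scikit-learn's DBSCAN over hypothetical WiFi fingerprints, where each scan is a vector of signal strengths for a fixed set of access points; dense clusters would correspond to significant places, and the noise label (-1) to transitions between them. The data and the eps/min_samples values are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Hypothetical WiFi fingerprints: one row per scan, one column per access
        # point, values are received signal strengths in dBm (0 = not heard).
        fingerprints = np.array([
            [-45, -60,   0],   # repeated scans at one place (e.g. a desk)
            [-46, -62,   0],
            [-44, -59,   0],
            [  0, -50, -70],   # repeated scans at a second place
            [  0, -52, -68],
            [-80, -85, -90],   # a scan taken while moving between places
        ])

        # eps and min_samples are tuning knobs; the values here are arbitrary.
        labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(fingerprints)
        print(labels)  # [0 0 0 1 1 -1]: two significant places plus one noise point

    Sequences of such place labels could then be fed to a probabilistic topic model (e.g. scikit-learn's LatentDirichletAllocation) to recover the periodic rhythms the abstract describes.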

    Bringing Computational Thinking to Nonengineering Students through a Capstone Course

    Although the concept of computational thinking has flourished, little research has explored how to integrate its various elements into an undergraduate classroom setting. Clarifying core concepts of computational thinking and providing empirical cases that apply computational thinking practices in a real-world educational setting are crucial to the success of software engineering education. In this article, we describe the development of a curriculum for a social innovation capstone course using core concepts and elements of computational thinking. The course was designed for undergraduate students of a liberal arts college at a university in Korea. Students were asked to define a social problem and were introduced to the core concepts and processes of computational thinking, aided by Arduino and Raspberry Pi programming environments. After building a business model, they implemented a working prototype of their proposed solution. We document class project outcomes and student feedback to demonstrate the effectiveness of the approach.