    Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection

    In order to avoid the "Midas Touch" problem, gaze-based interfaces for selection often introduce a dwell time: a fixed amount of time the user must fixate upon an object before it is selected. Past interfaces have used a uniform dwell time across all objects. Here, we propose a gaze-based browser using a two-step selection policy with variable dwell time. In the first step, a command, e.g. "back" or "select", is chosen from a menu using a dwell time that is constant across the different commands. In the second step, if the "select" command is chosen, the user selects a hyperlink using a dwell time that varies between different hyperlinks. We assign shorter dwell times to more likely hyperlinks and longer dwell times to less likely hyperlinks. In order to infer the likelihood each hyperlink will be selected, we have developed a probabilistic model of natural gaze behavior while surfing the web. We have evaluated a number of heuristic and probabilistic methods for varying the dwell times using both simulation and experiment. Our results demonstrate that varying dwell time improves the user experience in comparison with fixed dwell time, resulting in fewer errors and increased speed. While all of the methods for varying dwell time resulted in improved performance, the probabilistic models yielded much greater gains than the simple heuristics. The best performing model reduces error rate by 50% compared to 100 ms uniform dwell time while maintaining a similar response time. It reduces response time by 60% compared to 300 ms uniform dwell time while maintaining a similar error rate.

    Comment: This is an Accepted Manuscript of an article published by Taylor & Francis in the International Journal of Human-Computer Interaction on 30 March 2018, available online: http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of the final published article, please access: https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
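
    As an illustration of the variable dwell-time idea, the sketch below maps hypothetical hyperlink selection probabilities to dwell times between 100 ms and 300 ms, shorter for more likely links. The link names, probability values and linear interpolation rule are placeholder assumptions; the paper's actual likelihoods come from its probabilistic model of natural gaze behavior while surfing the web.

```python
# Illustrative sketch: assign shorter dwell times to more likely links.
# The probabilities below are placeholders; the paper infers them from
# a probabilistic model of natural gaze behavior.

def assign_dwell_times(link_probs, t_min=0.1, t_max=0.3):
    """Map link selection probabilities to dwell times in seconds.

    The most likely link gets t_min, the least likely gets t_max,
    and the rest are linearly interpolated in between.
    """
    p_hi = max(link_probs.values())
    p_lo = min(link_probs.values())
    span = (p_hi - p_lo) or 1.0  # avoid /0 when all links are equally likely
    return {
        link: t_max - (p - p_lo) / span * (t_max - t_min)
        for link, p in link_probs.items()
    }

# Example: a frequently chosen "next" link selects after ~100 ms,
# while a rarely visited "legal" link requires the full 300 ms.
print(assign_dwell_times({"next": 0.6, "home": 0.3, "legal": 0.1}))
```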

    Augmentative and alternative communication (AAC) advances: A review of configurations for individuals with a speech disability

    High-tech augmentative and alternative communication (AAC) methods are on a constant rise; however, the interaction between the user and the assistive technology still falls short of an optimal user experience centered on the desired activity. This review presents a range of signal sensing and acquisition methods utilized in conjunction with the existing high-tech AAC platforms for individuals with a speech disability, including imaging methods, touch-enabled systems, mechanical and electro-mechanical access, breath-activated methods, and brain–computer interfaces (BCI). The listed AAC sensing modalities are compared in terms of ease of access, affordability, complexity, portability, and typical conversational speeds. An examination of the associated AAC signal processing, encoding, and retrieval highlights the roles of machine learning (ML) and deep learning (DL) in the development of intelligent AAC solutions. The complexity and cost of most systems hinder the widespread adoption of high-tech AAC. Further research is needed to develop intelligent AAC applications that reduce the associated costs and enhance the portability of the solutions for a real user's environment. The integration of natural language processing with current solutions also needs further exploration to improve conversational speeds. Recommendations for prospective advances in coming high-tech AAC are addressed in terms of developments that support mobile health communication applications.

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    As a definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey aims to review the large literature of the last two decades regarding biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has experienced a considerable rise, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why cognitive processes involved in learning and training are enhanced under immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have already combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Motivation Modelling and Computation for Personalised Learning of People with Dyslexia

    The increasing development of e-learning systems in recent decades has benefited ubiquitous computing and education by providing freedom of choice to satisfy various needs and preferences regarding where and at what pace learning takes place. Automatic recognition of learners' states is necessary for personalised services or interventions to be provided in e-learning environments. In the current literature, assessment of learners' motivation for personalised learning based on motivational states is lacking. An effective learning environment needs to address learners' motivational needs, particularly for those with dyslexia. Dyslexia or other learning difficulties can cause young people to disengage from the education system or drop out for complex reasons: in addition to the learning difficulties related to reading, writing or spelling, psychological difficulties, such as lower academic self-worth and lack of learning motivation caused by the unavoidable learning difficulties, are more likely to be overlooked. Associated with both cognitive processes and emotional states, motivation is a multi-faceted concept that results in the continued intention to use an e-learning system and thus a better chance of learning effectiveness and success. It consists of factors of intrinsic motivation, driven by learners' inner feelings of interest or challenge, and factors of extrinsic motivation, associated with external rewards or compliments. These factors represent learners' various motivational needs; understanding them therefore requires a multidisciplinary approach. Combining different perspectives of knowledge on psychological theories and technology acceptance models with the empirical findings from a qualitative study with dyslexic students conducted in the present research project, motivation modelling for people with dyslexia using a hybrid approach is the main focus of this thesis. Specifically, in addition to contributing a qualitative conceptual motivation model and an ontology-based computational model that formally expresses the motivational factors affecting users' continued intention to use e-learning systems, this thesis also develops a quantitative approach to motivation modelling. A multi-item motivation questionnaire is designed and employed in a quantitative study with dyslexic students, and structural equation modelling techniques are used to quantify the influences of the motivational factors on continued use intention and their interrelationships in the model. In addition to the traditional approach to motivation computation, which relies on learners' self-reported data, this thesis also employs dynamic sensor data and develops classification models using logistic regression for real-time assessment of motivational states. A rule-based reasoning mechanism for personalising motivational strategies and a framework for motivationally personalised e-learning systems are introduced to apply the research findings to e-learning systems in real-world scenarios. The motivation model, sensor-based computation and rule-based personalisation have been applied to a practical scenario, with an essential part incorporated in the prototype of a gaze-based learning application that can output personalised motivational strategies during the learning process according to the real-time assessment of learners' motivational states based on both eye-tracking data and users' self-reported data.
    Evaluation results indicate the advantage of the implemented application over a traditional one that does not incorporate the present research findings for monitoring learners' motivational states with gaze data and generating personalised feedback. In summary, the present research project has:
    1) developed a conceptual motivation model for students with dyslexia, defining the motivational factors that influence their continued intention to use e-learning systems, based on both a qualitative empirical study and prior research and theories;
    2) developed an ontology-based motivation model in which user profiles, factors in the motivation model and personalisation options are structured as a hierarchy of classes;
    3) designed a multi-item questionnaire, conducted a quantitative empirical study, and used structural equation modelling to further explore and confirm the quantified impacts of motivational factors on continued use intention and the quantified relationships between the factors;
    4) conducted an experiment to exploit sensors for motivation computation, and developed classification models for real-time assessment of the motivational states pertaining to each factor in the motivation model based on empirical sensor data, including eye gaze and EEG data;
    5) proposed a sensor-based motivation assessment system architecture, with emphasis on the use of ontologies for a computational representation of the sensor features used for motivation assessment in addition to the representation of the motivation model, and described the semantic rule-based personalisation of motivational strategies;
    6) proposed a framework for motivationally personalised e-learning systems based on the present research, with the prototype of a gaze-based learning application designed, implemented and evaluated to guide future work.
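
    As a rough, hypothetical illustration of the sensor-based motivation computation step, the sketch below fits a logistic-regression classifier on synthetic gaze features. The feature names, the synthetic data and the toy labelling rule are all assumptions made for demonstration; the thesis derives its models from empirical eye-gaze and EEG data together with learners' self-reports.

```python
# Hypothetical sketch of real-time motivational-state classification
# with logistic regression. All data and features here are synthetic;
# the thesis uses empirical eye-gaze and EEG data with self-reports.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Assumed features per learning-session window:
# [mean fixation duration (ms), saccade rate (1/s), blink rate (1/min)]
X = rng.normal(loc=[250.0, 3.0, 15.0], scale=[60.0, 1.0, 5.0], size=(200, 3))
# Toy binary label standing in for self-reported motivational state.
y = (X[:, 0] > 250.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```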

    Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges

    Intelligent vehicles and advanced driver assistance systems (ADAS) need to have proper awareness of the traffic context as well as the driver status, since ADAS share vehicle control authority with the human driver. This study provides an overview of ego-vehicle driver intention inference (DII), focusing mainly on lane change intention on highways. First, the human intention mechanism is discussed to give an overall understanding of driver intention. Next, the ego-vehicle driver intention is classified into different categories based on various criteria. A complete DII system can be separated into modules for traffic context awareness, driver state monitoring, and vehicle dynamics measurement. The relationships between these modules and their impacts on DII are analyzed. Then, the lane change intention inference (LCII) system is reviewed from the perspective of input signals, algorithms, and evaluation. Finally, future concerns and emerging trends in this area are highlighted.
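
    As a minimal sketch of the modular structure this survey describes, the toy pipeline below fuses outputs from the three modules (traffic context awareness, driver state monitoring, vehicle dynamics measurement) into a single lane-change intention score. The fields, thresholds and weights are invented placeholders, not values from any reviewed system.

```python
# Toy sketch of a modular DII pipeline: three input modules fused
# into a lane-change intention score. All fields and weights are
# invented placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class TrafficContext:
    gap_to_lead_vehicle_m: float   # distance to the vehicle ahead
    adjacent_lane_free: bool       # space available in the target lane

@dataclass
class DriverState:
    mirror_check_rate_hz: float    # gaze/head turns toward the mirrors

@dataclass
class VehicleDynamics:
    lateral_offset_m: float        # drift toward the lane boundary
    turn_signal_on: bool

def lane_change_intent(ctx: TrafficContext,
                       drv: DriverState,
                       dyn: VehicleDynamics) -> float:
    """Return a [0, 1] lane-change intention score (toy weighted fusion)."""
    score = 0.4 * dyn.turn_signal_on
    score += 0.3 * min(drv.mirror_check_rate_hz / 0.5, 1.0)
    score += 0.2 * min(abs(dyn.lateral_offset_m) / 0.5, 1.0)
    # A closing gap plus a free adjacent lane raises the score slightly.
    if ctx.adjacent_lane_free and ctx.gap_to_lead_vehicle_m < 30.0:
        score += 0.1
    return min(score, 1.0)

print(lane_change_intent(
    TrafficContext(gap_to_lead_vehicle_m=20.0, adjacent_lane_free=True),
    DriverState(mirror_check_rate_hz=0.4),
    VehicleDynamics(lateral_offset_m=0.3, turn_signal_on=True),
))
```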