305 research outputs found

    Evaluating Direct Pointing and Indirect Cursor Interactions with Fitts' Law in Stereoscopic Environments

    The development of virtual environment research has reached the stage of human interaction with three-dimensional (3D) objects. In this study, Fitts' method was applied to such interaction techniques in virtual environments, and the applicability of Fitts' law in 3D virtual environments was also assessed. The experiment included two modes of interaction, direct and indirect, which differ in how users interact with 3D objects. Both interaction techniques were tested at three indices of difficulty and three egocentric target distances (the distance from the participant to the target). Movement time and throughput were measured for each interaction technique. The results show that the direct pointing technique is more efficient for interacting with targets close to the participant, while the indirect cursor technique may be a viable option for targets farther away. Throughput was significantly higher for the direct pointing technique than for the indirect cursor technique. Mean movement time was highly correlated with the targets' index of difficulty for all interaction techniques, supporting the evidence that Fitts' law can be applied to interactions in 3D virtual environments. Developers of VE applications can draw on these findings when designing user interactions.
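
    The two metrics reported above can be made concrete with a short Python sketch of Fitts' index of difficulty (Shannon formulation) and throughput. This is a simplified illustration: ISO 9241-9 throughput is normally computed from the effective target width and distance, which this version omits.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits/s: index of difficulty over mean movement time."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 0.30 m reach to a 0.05 m-wide target completed in 0.8 s
print(index_of_difficulty(0.30, 0.05))  # ~2.81 bits
print(throughput(0.30, 0.05, 0.8))      # ~3.51 bits/s
```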

    Development of an Eye-Gaze Input System With High Speed and Accuracy through Target Prediction Based on Homing Eye Movements

    In this study, we propose a method that predicts a target from the trajectory of eye movements and increases pointing speed while maintaining high predictive accuracy. First, a predictive method based on ballistic (fast) eye movements (Approach 1) was evaluated in terms of pointing speed and predictive accuracy. In Approach 1, the so-called Midas touch problem (pointing at an unintended target) occurred, particularly when a small number of samples was used to predict a target. Therefore, to overcome the poor predictive accuracy of Approach 1, we developed a new predictive method (Approach 2) that uses homing (slow) rather than ballistic (fast) eye movements. Approach 2 overcame the inaccurate prediction of Approach 1, shortening the pointing time while maintaining high predictive accuracy.
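
    The abstract does not detail the authors' predictor, so the following Python sketch only illustrates the general idea behind Approach 2: separate homing from ballistic samples with a velocity threshold (the 100 px/s cut-off and the function names are assumptions) and predict the candidate target nearest to where the slow samples settle.

```python
import numpy as np

def predict_target(gaze_xy: np.ndarray, timestamps: np.ndarray,
                   targets: np.ndarray, vel_thresh: float = 100.0):
    """Predict the intended target index from homing (slow) gaze samples.

    gaze_xy: (N, 2) gaze positions in px; timestamps: (N,) seconds;
    targets: (M, 2) candidate target centres; vel_thresh: px/s cut-off
    separating ballistic from homing samples (an assumed value).
    """
    step = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)
    velocity = step / np.diff(timestamps)
    homing = gaze_xy[1:][velocity < vel_thresh]      # keep only slow samples
    if len(homing) == 0:
        return None                                  # no homing phase observed yet
    centre = homing.mean(axis=0)                     # where gaze is settling
    return int(np.argmin(np.linalg.norm(targets - centre, axis=1)))
```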

    Gaze–mouse coordinated movements and dependency with coordination demands in tracing.

    Eye movements have been shown to lead hand movements in tracing tasks in which subjects move their fingers along a predefined trace. The question remained whether this leading relationship holds when tracing with a pointing device, such as a mouse, and, more importantly, whether tasks requiring more or less gaze–mouse coordination introduce variation in this pattern of behaviour, in terms of both the spatial and the temporal lead of gaze position over mouse movement. A three-level gaze–mouse coordination-demand paradigm was developed to address these questions. A substantial dataset of 1350 trials was collected and analysed. The linear correlation of gaze–mouse movements, the statistical distribution of the lead time, and the lead distance between gaze and mouse cursor positions were all considered, and we propose a new method to quantify lead time in gaze–mouse coordination. The results support and extend previous empirical findings that gaze often leads mouse movements. We found that the gaze–mouse coordination demands of the task were positively correlated with the gaze lead, both spatially and temporally. However, mouse movements were synchronised with, or even led, gaze in the simple straight-line condition, which demanded the least gaze–mouse coordination.
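
    One generic way to quantify the gaze-mouse lead time, not necessarily the new method proposed in this paper, is the lag that maximises the cross-correlation between the two position series. A minimal sketch along one axis:

```python
import numpy as np

def lead_time(gaze: np.ndarray, mouse: np.ndarray, dt: float) -> float:
    """Estimate how far gaze leads the mouse along one axis, in seconds.

    gaze and mouse are equal-length 1-D position series sampled every dt
    seconds; a positive result means gaze leads the mouse.
    """
    g = gaze - gaze.mean()                      # mean-centre both signals
    m = mouse - mouse.mean()
    xcorr = np.correlate(m, g, mode="full")     # correlate mouse against gaze
    lag = int(np.argmax(xcorr)) - (len(g) - 1)  # samples by which mouse trails
    return lag * dt
```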

    Addressing Situational and Physical Impairments and Disabilities with a Gaze-Assisted, Multi-Modal, Accessible Interaction Paradigm

    Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are engaged in a task and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a factory worker with greasy hands or thick gloves, and a person driving a car all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or impossible. Unfortunately, individuals with physical impairments and disabilities, from birth or due to an injury, are forced to deal with these limitations every single day. Generally, these individuals experience difficulty with, or are completely unable to perform, basic operations on a computer. Therefore, to address situational and physical impairments and disabilities, it is crucial to develop hands-free, accessible interactions. In this research, we address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze- and foot-based interaction framework to achieve accurate "point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures. The interaction methods and devices we have developed are a) evaluated using standard HCI procedures such as Fitts' law, text entry metrics, authentication accuracy, and video analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed through user interviews. From the evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues. To discuss each of these solutions: first, the gaze- and foot-based system we developed supports point-and-click interactions while addressing the "Midas touch" issue. The system performs at least as well (in time and precision) as the mouse, while enabling hands-free interaction. We have also investigated the feasibility, advantages, and challenges of gaze- and foot-based point-and-click interactions on standard (up to 24") and large (up to 84") displays through Fitts' law evaluations. Additionally, we have compared the performance of gaze input to other standard inputs like the mouse and touch. Second, to support text entry, we developed a gaze- and foot-based dwell-free typing system and investigated foot-based activation methods like foot presses and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods. Using our gaze typing system, users type up to 14.98 words per minute (WPM), as opposed to 11.65 WPM with dwell-based typing.
    Importantly, our system addresses the critical usability issues associated with gaze typing in general. Third, we addressed the lack of an accessible, shoulder-surfing-resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures. Our authentication methods use static and dynamic transitions of objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single-video iterative attacks and has a lower success rate under dual-video iterative attacks. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to let users design gaze gestures of their choice and associate them with commands such as minimize, maximize, and scroll on the computer. We presented a template matching algorithm that achieved an accuracy of 93%, and a geometric feature-based decision tree algorithm that achieved an accuracy of 90.2%, in recognizing gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
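
    The abstract names a template matching algorithm but not its details, so here is a minimal $1-recognizer-style sketch of the general technique: resample the gaze path to a fixed length, normalise it, and pick the stored template with the smallest mean point-wise distance. The function names and the 32-point resolution are assumptions.

```python
import numpy as np

def normalize(path: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a gaze path to n points, centre it, and scale to a unit box."""
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(path, axis=0), axis=1)])
    even = np.linspace(0.0, d[-1], n)                  # evenly spaced arc lengths
    resampled = np.column_stack([np.interp(even, d, path[:, 0]),
                                 np.interp(even, d, path[:, 1])])
    resampled -= resampled.mean(axis=0)                # translate centroid to origin
    scale = np.abs(resampled).max()
    return resampled / scale if scale > 0 else resampled

def recognize(path: np.ndarray, templates: dict) -> str:
    """Return the name of the template with the smallest mean point distance."""
    p = normalize(path)
    return min(templates, key=lambda name: np.mean(
        np.linalg.norm(p - normalize(templates[name]), axis=1)))
```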

    Improving eye–computer interaction interface design: Ergonomic investigations of the optimum target size and gaze-triggering dwell time

    The Midas touch problem arises from the interactive feedback of an interface's functional elements, and low spatial accuracy is related to the size of the interaction area. This study addresses these two problems from the perspective of human-computer interaction and ergonomics. Two experiments were conducted to explore the optimum target size and gaze-triggering dwell time for an eye–computer interaction (ECI) system. Experiment Series 1 served as the pre-experiment, identifying the element size that yields a higher task completion rate. Experiment Series 2 served as the main experiment, investigating the optimum gaze-triggering dwell time through a comprehensive evaluation of task completion rate, reaction time, and NASA-TLX (Task Load Index) scores. In Experiment Series 1, the optimal element size was determined to be 256 × 256 px². Experiment Series 2 concluded that with the dwell time set to 600 ms, the interface is most efficient and the subjects' task load is minimal. The results of Experiment Series 1 and 2 improve the usability of the interface, and the optimal control size and dwell time obtained from the experiments offer reference and application value for interface design and software development of ECI systems.
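
    A dwell-based trigger of the kind studied here is straightforward to sketch in Python. The 600 ms dwell time and the 256 × 256 px element size come from the study's findings; the class interface itself is an illustrative assumption.

```python
import time

class DwellTrigger:
    """Fire a selection once gaze has dwelt inside a target region long enough.

    The 600 ms default reflects the dwell time found optimal in this study;
    the rest of the interface is an illustrative assumption.
    """

    def __init__(self, dwell_s: float = 0.6):
        self.dwell_s = dwell_s
        self._entered = None

    def update(self, gaze_xy, target_rect) -> bool:
        """Feed one gaze sample; returns True when the dwell completes.

        target_rect is (x, y, w, h), e.g. a 256 x 256 px element.
        """
        x, y, w, h = target_rect
        inside = x <= gaze_xy[0] <= x + w and y <= gaze_xy[1] <= y + h
        if not inside:
            self._entered = None               # gaze left the target: reset
            return False
        if self._entered is None:
            self._entered = time.monotonic()   # gaze just entered the target
        if time.monotonic() - self._entered >= self.dwell_s:
            self._entered = None               # fire once, then re-arm
            return True
        return False
```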

    Prediction of user action in moving-target selection tasks

    Selection of moving targets is a common task in human–computer interaction (HCI), and more specifically in virtual reality (VR). In spite of the growing number of applications involving moving-target selection, HCI and VR studies have largely focused on static-target selection. Compared to its static-target counterpart, however, moving-target selection poses special challenges, including the need to continuously and simultaneously track the target and plan the reach for it, which may be difficult depending on the user's reactiveness and the target's movement. Action prediction has proven to be the most comprehensive enhancement for addressing these challenges. Current predictive techniques, however, rely heavily on continuous tracking of user actions, without considering the possibility that target-reaching actions may have a dominant pre-programmed component (the pre-programmed control theory). Building on this theory, this research explores the possibility of predicting moving-target selection prior to action execution. Specifically, three levels of action prediction are investigated: action performance, prospective action difficulty, and intention. The proposed performance models predict the movement time (MT) required to reach a moving target in 2-D and 3-D space, and are useful for comparing users and interfaces objectively. The prospective difficulty (PD) models predict the subjective effort required to reach a moving target without actually executing the action, and can therefore be measured when performance cannot. Finally, the intention models predict the target that the user plans to select, and can therefore be used to facilitate the selection of the intended target. Intention prediction models are developed using decision trees and scoring functions, and evaluated in two VR studies: the first investigates undirected selection (tasks in which users are free to select any object among multiple others), and the second directed selection (the more common experimental task in which users are instructed to select a specific object). PD models for 1-D and 2-D moving-target selection tasks are developed based on Fitts' law and evaluated in an online experiment. Finally, MT models with the same structural form as the PD models are evaluated in a 3-D moving-target selection experiment deployed in VR. Aside from intention prediction in directed selection, all of the explored models yield relatively high accuracy: up to ~78% when predicting intended targets in undirected tasks, R^2 = .97 when predicting PD, and R^2 = .93 when predicting MT.
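
    The PD and MT models share the linear structural form MT = a + b * ID inherited from Fitts' law, which can be fit by ordinary least squares. The sketch below shows only this basic fitting step; the dissertation's actual models extend it with moving-target terms not reproduced here, and the example data are made up.

```python
import numpy as np

def fit_fitts_model(ids: np.ndarray, mts: np.ndarray):
    """Least-squares fit of MT = a + b * ID; returns (a, b, R^2)."""
    b, a = np.polyfit(ids, mts, 1)          # slope b, intercept a
    pred = a + b * ids
    ss_res = np.sum((mts - pred) ** 2)
    ss_tot = np.sum((mts - mts.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Example with made-up data: IDs in bits, movement times in seconds
ids = np.array([1.0, 2.0, 3.0, 4.0])
mts = np.array([0.42, 0.61, 0.79, 1.01])
print(fit_fitts_model(ids, mts))
```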

    Methods and metrics for the improvement of the interaction and the rehabilitation of cerebral palsy through inertial technology

    Cerebral palsy (CP) is one of the most limiting disabilities in childhood, with 2.2 cases per 1000 one-year survivors. It is a disorder of movement and posture due to a defect or lesion of the immature brain during pregnancy or birth. These motor limitations frequently appear in combination with sensory and cognitive alterations, which generally result in great difficulties for some people with CP in manipulating objects, communicating, and interacting with their environment, as well as limited mobility. Over the last decades, instruments such as personal computers have become a popular tool to overcome some of these motor limitations and to promote neural plasticity, especially during childhood. According to some estimates, 65% of youths with CP who present severely limited manipulation skills cannot use standard mice or keyboards. Unfortunately, even when people with CP use assistive technology for computer access, they face barriers that lead them to fall back on typical mice, trackballs, or touch screens for practical reasons. Nevertheless, with proper customization, novel alternative input devices such as head mice or eye trackers can be a valuable solution for these individuals. This thesis presents a collection of novel mapping functions and facilitation algorithms proposed and designed to ease the act of pointing at graphical elements on the screen, the most elemental task in human-computer interaction, for individuals with CP. These developments were implemented for use with any head mouse, although they were all tested with the ENLAZA, an inertial interface. The development of such techniques required the following approach:
    1. Developing a methodology to evaluate the performance of individuals with CP in pointing tasks, which are usually described as two sequential subtasks: navigation and targeting.
    2. Identifying the main motor abnormalities present in individuals with CP, as well as assessing the compliance of these users with standard motor behaviour models such as Fitts' law.
    3. Designing and validating three novel pointing facilitation techniques to be implemented in a head mouse. They were conceived for users with CP and muscle weakness who have great difficulty keeping their heads in a stable position. The first two algorithms consist of novel mapping functions that aim to facilitate the navigation phase, whereas the third is based on gravity wells and was specially developed to facilitate the selection of elements on the screen (see the sketch after this abstract).
    In parallel with the development of the facilitation techniques for the interaction process, we evaluated the feasibility of using inertial technology for the control of serious video games as a complement to traditional rehabilitation therapies for posture and balance. The experimental validation presented here confirms that this concept could be implemented in clinical practice with good results. In summary, the work presented here proves the suitability of inertial technology for the development of an alternative pointing device, and pointing algorithms, based on head movements for individuals with CP and severely limited manipulation skills, as well as new rehabilitation therapies for the improvement of posture and balance.
    All the contributions were validated in collaboration with several centres specialized in CP and similar disorders, with users with disabilities recruited in those centres.
    This thesis was completed in the Group of Neural and Cognitive Engineering (gNEC) of the CAR UPM-CSIC with the financial support of the FP7 Framework EU Research Project ABC (EU-2012-287774), the IVANPACE Project (funded by Obra Social de Caja Cantabria, 2012-2013), and the Spanish Ministry of Economy and Competitiveness in the framework of two projects: the Interplay Project (RTC-2014-1812-1) and, most recently, the InterAAC Project (RTC-2015-4327-1).
    Doctoral programme: Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Committee: Juan Manuel Belda Lois (chair), María Dolores Blanco Rojas (secretary), Luis Fernando Sánchez Sante (member).
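
    The abstract does not describe the gravity-well technique in detail, so the following Python sketch only illustrates the general idea: once the cursor enters a well around the nearest target, the user's motion is damped and the cursor is pulled toward the target centre. The radius and strength parameters are assumed tuning values, not values from the thesis.

```python
import numpy as np

def gravity_well_adjust(cursor, velocity, targets,
                        radius: float = 80.0, strength: float = 0.4):
    """Return the next cursor position, attracted toward the nearest target.

    cursor and velocity are (2,) arrays in px and px/frame (head-mouse
    output); targets is an (M, 2) array of target centres. radius and
    strength are assumed tuning values.
    """
    cursor = np.asarray(cursor, dtype=float)
    velocity = np.asarray(velocity, dtype=float)
    dists = np.linalg.norm(targets - cursor, axis=1)
    i = int(np.argmin(dists))
    if dists[i] < radius:                                   # inside the well
        pull = (targets[i] - cursor) * strength * (1.0 - dists[i] / radius)
        return cursor + velocity * (1.0 - strength) + pull  # damp and attract
    return cursor + velocity                                # free movement
```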