
    Control theoretic models of pointing

    This article presents an empirical comparison of four models from manual control theory on their ability to model the targeting behaviour of human users with a mouse: McRuer’s Crossover, Costello’s Surge, second-order lag (2OL), and the Bang-bang model. Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase-space, and Hooke plot visualisations of the experimental data to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature naturally leads to more behavioural variability. We report on characteristics of human surge behaviour (the initial, ballistic sub-movement) in pointing, as well as differences in a number of controller performance measures, including overshoot, settling time, peak time, and rise time. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts’-law-based approaches in HCI, with models providing representations and predictions of human pointing dynamics that can improve our understanding of pointing and inform design.
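    Of the four models compared above, the second-order lag (2OL) is the simplest to state: the pointer behaves like a damped spring pulled toward the target. Below is a minimal simulation sketch, assuming unit mass and placeholder stiffness/damping gains (the paper fits such parameters to recorded trajectories). Plotting velocity against position gives the phase-space view; plotting acceleration against displacement gives the Hooke plot.

```python
# Minimal 2OL pointing-model sketch (illustrative, not the paper's
# implementation): Euler integration of a spring-damper pulled to the
# target. Gains k and d are placeholder values to be fitted to data.
import numpy as np

def simulate_2ol(target, x0=0.0, k=40.0, d=9.0, dt=0.002, duration=1.5):
    """Return time, position, and velocity arrays for a 2OL step response."""
    n = int(duration / dt)
    x, v = x0, 0.0
    xs, vs = np.empty(n), np.empty(n)
    for i in range(n):
        a = k * (target - x) - d * v   # spring pull minus damping
        v += a * dt
        x += v * dt
        xs[i], vs[i] = x, v
    return np.arange(n) * dt, xs, vs

t, pos, vel = simulate_2ol(target=1.0)  # e.g. a unit-distance pointing task
```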

    Breathing Life Into Biomechanical User Models

    Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment, instead of imperfect sensory observations; and lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.
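    The control problem described here maps naturally onto a standard RL environment interface. The toy sketch below uses the Gymnasium API to illustrate the three ingredients the paper emphasises: muscle-style antagonist actuation, noisy perceptual observations, and a pointing-style task with a physical effector. The dynamics, noise level, and reward are placeholder assumptions, not the paper's biomechanical simulation.

```python
# Hypothetical sketch of the control problem (toy stand-in only; the
# paper uses full muscle-actuated biomechanics and richer perception).
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class MusclePointingEnv(gym.Env):
    """1-D pointing with an antagonist 'muscle' pair and noisy vision."""

    def __init__(self, obs_noise=0.01):
        self.action_space = spaces.Box(0.0, 1.0, shape=(2,))       # two activations
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,))
        self.obs_noise = obs_noise

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.x, self.v = 0.0, 0.0
        self.target = float(self.np_random.uniform(-1.0, 1.0))
        return self._obs(), {}

    def step(self, action):
        force = float(action[0] - action[1])    # net antagonist force
        self.v += 0.1 * force - 0.05 * self.v   # crude first-order dynamics
        self.x += self.v
        done = abs(self.x - self.target) < 0.02
        reward = 1.0 if done else -0.01         # time/effort penalty
        return self._obs(), reward, done, False, {}

    def _obs(self):
        # Imperfect sensory observation instead of privileged state access.
        noise = self.np_random.normal(0.0, self.obs_noise, size=2)
        return (np.array([self.x, self.target]) + noise).astype(np.float32)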

    Prediction of user action in moving-target selection tasks

    Selection of moving targets is a common task in human–computer interaction (HCI), and more specifically in virtual reality (VR). In spite of the increased number of applications involving moving-target selection, HCI and VR studies have largely focused on static-target selection. Compared to its static-target counterpart, however, moving-target selection poses special challenges, including the need to continuously and simultaneously track the target and plan to reach for it, which may be difficult depending on the user’s reactiveness and the target’s movement. Action prediction has proven to be the most comprehensive enhancement to address moving-target selection challenges. Current predictive techniques, however, heavily rely on continuous tracking of user actions, without considering the possibility that target-reaching actions may have a dominant pre-programmed component—this theory is known as the pre-programmed control theory. Thus, based on the pre-programmed control theory, this research explores the possibility of predicting moving-target selection prior to action execution. Specifically, three levels of action prediction are investigated: action performance, prospective action difficulty, and intention. The proposed performance models predict the movement time (MT) required to reach for a moving target in 2-D and 3-D space, and are useful to compare users and interfaces objectively. The prospective difficulty (PD) models predict the subjective effort required to reach for a moving target, without actually executing the action, and can therefore be measured when performance cannot. Finally, the intention models predict the target that the user plans to select, and can therefore be used to facilitate the selection of the intended target. Intention prediction models are developed using decision trees and scoring functions, and evaluated in two VR studies: the first investigates undirected selection (i.e., tasks in which the users are free to select an object among multiple others), and the second directed selection (i.e., the more common experimental task in which users are instructed to select a specific object). PD models for 1-D and 2-D moving-target selection tasks are developed based on Fitts’ Law, and evaluated in an online experiment. Finally, MT models with the same structural form as the aforementioned PD models are evaluated in a 3-D moving-target selection experiment deployed in VR. Aside from intention predictions on directed selection, all of the explored models yield relatively high accuracies—up to ~78% predicting intended targets in undirected tasks, R^2 = .97 predicting PD, and R^2 = .93 predicting MT.
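    The MT and PD models above share a Fitts'-law-style linear structure. As a hedged sketch of how such a model is fitted (the thesis's exact index-of-difficulty formulation for moving targets is not reproduced here; the static Shannon form is used purely as a placeholder):

```python
# Hedged sketch: least-squares fit of a Fitts'-style model MT = a + b*ID.
# The Shannon ID = log2(A/W + 1) below is a stand-in; a moving-target
# formulation would also account for target velocity.
import numpy as np

def fit_fitts(amplitudes, widths, movement_times):
    ids = np.log2(np.asarray(amplitudes, float) / np.asarray(widths, float) + 1.0)
    X = np.column_stack([np.ones_like(ids), ids])   # intercept a, slope b
    (a, b), *_ = np.linalg.lstsq(X, np.asarray(movement_times, float), rcond=None)
    return a, b, ids

# Example with made-up data (amplitude and width in pixels, MT in seconds):
a, b, ids = fit_fitts([256, 512, 512], [32, 32, 64], [0.55, 0.72, 0.63])
```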

    Methods and metrics for the improvement of the interaction and the rehabilitation of cerebral palsy through inertial technology

    Cerebral palsy (CP) is one of the most limiting disabilities in childhood, with an incidence of 2.2 cases per 1000 one-year survivors. It is a disorder of movement and posture due to a defect or lesion of the immature brain during pregnancy or birth. These motor limitations frequently appear in combination with sensory and cognitive alterations, which generally results in great difficulty for some people with CP in manipulating objects, communicating, and interacting with their environment, as well as in limited mobility. Over the last decades, instruments such as personal computers have become a popular tool to overcome some of these motor limitations and to promote neural plasticity, especially during childhood. According to some estimations, 65% of youths with CP who present severely limited manipulation skills cannot use standard mice or keyboards. Unfortunately, even when people with CP use assistive technology for computer access, they face barriers that lead them to use typical mice, trackballs, or touch screens for practical reasons. Nevertheless, with the proper customization, novel developments of alternative input devices such as head mice or eye trackers can be a valuable solution for these individuals. This thesis presents a collection of novel mapping functions and facilitation algorithms proposed and designed to ease pointing at graphical elements on the screen, the most elemental task in human-computer interaction, for individuals with CP. These developments were implemented to be used with any head mouse, although they were all tested with the ENLAZA, an inertial interface developed in our group. The work required the following approach: first, developing a methodology to evaluate the performance of individuals with CP in pointing tasks, which are usually described as two sequential subtasks, navigation and targeting; second, identifying the main motor abnormalities present in individuals with CP and assessing the compliance of these users with standard motor behaviour models such as Fitts’ law; and third, designing and validating three novel pointing facilitation techniques to be implemented in a head mouse. These techniques were conceived for users with CP and muscle weakness who have great difficulty maintaining their head in a stable position. The first two are novel mapping functions that aim to facilitate the navigation phase, whereas the third is based on gravity wells and was specially developed to facilitate the selection of elements on the screen. In parallel with the development of the facilitation techniques, we evaluated the feasibility of using inertial technology to control serious video games as a complement to traditional rehabilitation therapies for posture and balance. The experimental validation presented here confirms that this concept could be implemented in clinical practice with good results. In summary, the works presented here prove the suitability of inertial technology for the development of an alternative pointing device, and associated pointing algorithms, based on head movements for individuals with CP and severely limited manipulation skills, as well as for new rehabilitation therapies for the improvement of posture and balance.
All the contributions were validated in collaboration with several centres specialized in CP and similar disorders, with users with disabilities recruited in those centres. This thesis was completed in the Group of Neural and Cognitive Engineering (gNEC) of the CAR UPM-CSIC with the financial support of the FP7 EU Research Project ABC (EU-2012-287774), the IVANPACE Project (funded by Obra Social de Caja Cantabria, 2012-2013), and the Spanish Ministry of Economy and Competitiveness through the Interplay Project (RTC-2014-1812-1) and the InterAAC Project (RTC-2015-4327-1).
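    The gravity-well selection aid described in this thesis can be illustrated in general form: when the pointer enters a well around a target, a small attraction toward the target centre is added to the user's motion, stabilising selection for users who have difficulty holding the head still. The sketch below shows only the generic idea; the radius, strength, and proximity weighting are illustrative assumptions rather than the thesis's algorithm.

```python
# Generic gravity-well pointing facilitation (illustrative sketch; not
# the thesis's proximity-based algorithm or fitted parameters).
import math

def apply_gravity_well(px, py, tx, ty, radius=60.0, strength=0.25):
    """Nudge pointer (px, py) toward target (tx, ty) when inside the well."""
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy)
    if 0.0 < dist < radius:
        # Attraction grows as the pointer nears the target centre.
        pull = strength * (1.0 - dist / radius)
        px += pull * dx
        py += pull * dy
    return px, py
```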

    Improving the performance of input interfaces through scaling and human motor models

    The performance of interfaces is affected by human factors, which vary from one person to another, and by the inherent characteristics of the devices involved. A set of techniques has been studied in order to improve the efficiency and efficacy of input interface devices. These techniques are based on modifying the motor scaling factor, a transformation similar to the well-known Control-Display (CD) ratio. Operation time, task accuracy, and user workload are the indicators used in this work. By means of models based on various human motor behaviors, the improvement of these indicators has been demonstrated. A number of experiments using common input interface devices have been carried out to evaluate the presented methodology. The results show that the overall performance of input interfaces is significantly improved by applying this methodology.
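    The motor scaling factor acts like a CD gain: each device displacement is multiplied by a gain before it moves the cursor. A minimal sketch, assuming a simple velocity-dependent gain schedule (the paper's fitted transformation may differ):

```python
# Sketch of motor-space scaling / CD gain: device deltas are scaled
# before being applied to the cursor. The linear velocity-dependent
# gain below ("pointer acceleration") is an illustrative assumption.
def scale_motion(dx, dy, dt, base_gain=1.0, accel=0.08):
    speed = (dx * dx + dy * dy) ** 0.5 / dt  # device speed (counts/s)
    gain = base_gain + accel * speed
    return gain * dx, gain * dy

cursor_dx, cursor_dy = scale_motion(dx=3.0, dy=-1.0, dt=0.008)
```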

    Human-computer interaction in ubiquitous computing environments

    Purpose – The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach – The paper quantifies the performance of human movement based on Fitts’ Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings – The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications – In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilise human body movements as input are surveyed here, and the paper addresses issues such as culture, privacy, security and ethics raised by interaction styles based on movement of the user’s body. Originality/value – The paper describes the utilisation of human body movements as input for interaction and interface control in pervasive computing settings.
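    For reference, the Shannon formulation of Fitts’ Law commonly used for such quantification (the paper's exact variant is not specified here) relates movement time MT to target amplitude A and width W:

```latex
% Fitts' law (Shannon formulation): a, b are empirically fitted constants,
% ID is the index of difficulty, and throughput is the derived rate measure.
MT = a + b \,\log_2\!\left(\frac{A}{W} + 1\right),
\qquad ID = \log_2\!\left(\frac{A}{W} + 1\right),
\qquad \text{throughput} = \frac{ID}{MT}
```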

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there are growing needs for humans and systems to use multiple communication modalities such as auditory, vocal (or speech), gesture, or visual channels; thus, it is important to evaluate multimodal human-machine interactions in multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings using human subjects which require costly and time-consuming efforts to conduct. To minimize the limitations from the use of traditional usability tests, digital human models are often developed and used, and they also help us better understand underlying human mental processes to effectively improve safety and avoid mental overload. In this regard, I have combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance/workload in various conditions, through this dissertation research. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) Architecture that enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, combining the following three specific research aims that are to understand: (1) how humans use their finger movements to input information on touchscreen devices (i.e., touchscreen gestures), (2) how humans use auditory/vocal signals to interact with the machines (i.e., audio/speech interaction), and (3) how humans drive vehicles (i.e., driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation research make significant contributions to our better understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems and whether they benefit or jeopardize human performance and safety in the multimodal and concurrent task environments. Moreover, in contrast to the previous models for multitasking scenarios mainly focusing on the visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From the practical impact perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by other time-consuming factors, such as developing prototypes and running human subjects. Furthermore, the research conducted in this dissertation may help identify which elements in the multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
    PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
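    QN-MHP treats perceptual, cognitive, and motor processing as servers in a queueing network through which task entities flow, so contention for a busy server is what produces multitasking delays. As a loose illustration of that idea only (service times below are arbitrary placeholders, not QN-MHP's calibrated parameters), a three-stage pipeline in SimPy:

```python
# Toy queueing-network illustration of the QN-MHP idea: stimuli pass
# through perceptual -> cognitive -> motor servers; waiting for a busy
# server models multitask interference. Times are placeholders.
import simpy

SERVICE = {"perceptual": 0.10, "cognitive": 0.07, "motor": 0.03}  # seconds

def task(env, name, stages):
    t0 = env.now
    for stage, server in stages.items():
        with server.request() as req:
            yield req                      # queue if the stage is busy
            yield env.timeout(SERVICE[stage])
    print(f"{name} finished after {env.now - t0:.2f}s")

env = simpy.Environment()
stages = {s: simpy.Resource(env, capacity=1) for s in SERVICE}
for i in range(3):                         # three concurrent stimuli
    env.process(task(env, f"stimulus-{i}", stages))
env.run()
```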

    Investigating motor skill in closed-loop myoelectric hand prostheses: Through speed-accuracy trade-offs


    A Hybrid Visual Control Scheme to Assist the Visually Impaired with Guided Reaching Tasks

    In recent years, numerous researchers have been working towards adapting technology developed for robotic control for use in high-technology assistive devices for the visually impaired. These types of devices have been proven to help visually impaired people live with a greater degree of confidence and independence. However, most prior work has focused primarily on a single problem from mobile robotics, namely navigation in an unknown environment. In this work we address the design and performance of an assistive device application to aid the visually impaired with a guided reaching task. The device follows an eye-in-hand, IBLM visual servoing configuration with a single camera and vibrotactile feedback to the user to direct guided tracking during the reaching task. We present a model for the system that employs a hybrid control scheme based on a Discrete Event System (DES) approach. This approach avoids significant problems inherent in the competing classical control or conventional visual servoing models for upper limb movement found in the literature. The proposed hybrid model parameterises the partitioning of the image state-space, producing a variable-size targeting window for compensatory tracking in the reaching task. The partitioning is created by positioning hypersurface boundaries within the state space which, when crossed, trigger events that cause DES-controller state transitions that enable differing control laws. A set of metrics encompassing accuracy (D), precision (θe), and overall tracking performance (ψ) is also proposed to quantify system performance, so that the effect of parameter variations and alternate controller configurations can be compared. To this end, a prototype called aiReach was constructed and experiments were conducted testing the functional use of the system and other supporting aspects of system behaviour with participant volunteers. Results are presented validating the system design and demonstrating effective use of a two-parameter partitioning scheme that utilises a targeting window with an additional hysteresis region to filter perturbations due to natural proprioceptive limitations for precise control of upper limb movement. Results from the experiments show that accuracy performance increased with the use of the dual-parameter hysteresis target window model (0.91 ≤ D ≤ 1, μ(D) = 0.9644, σ(D) = 0.0172) over the single-parameter fixed window model (0.82 ≤ D ≤ 0.98, μ(D) = 0.9205, σ(D) = 0.0297), while the precision metric, θe, remained relatively unchanged. In addition, the overall tracking performance metric produces scores which correctly rank the performance of the guided reaching tasks from most difficult to easiest.
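    The dual-parameter window is, in control terms, a hysteresis element: the DES event that declares the pointer on target fires at a tighter boundary than the event that releases it, so small proprioceptive jitter near a single boundary cannot cause rapid state chattering. A minimal sketch with assumed radii (not the thesis's parameter values):

```python
# Minimal sketch of the dual-parameter (hysteresis) targeting window:
# enter at the tight boundary, leave only past the wider one, which
# filters small proprioceptive perturbations. Radii are illustrative.
class HysteresisWindow:
    def __init__(self, r_enter=10.0, r_exit=18.0):
        assert r_enter < r_exit
        self.r_enter, self.r_exit = r_enter, r_exit
        self.on_target = False

    def update(self, error):
        """error: distance from target centre in image space."""
        if not self.on_target and error <= self.r_enter:
            self.on_target = True    # event: entered targeting window
        elif self.on_target and error > self.r_exit:
            self.on_target = False   # event: crossed the wider exit boundary
        return self.on_target
```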

    Musical Gesture through the Human Computer Interface: An Investigation using Information Theory

    This study applies information theory to investigate the human ability to communicate using continuous control sensors, with a particular focus on informing the design of digital musical instruments. There is an active practice of building and evaluating such instruments, for instance in the New Interfaces for Musical Expression (NIME) conference community. The fidelity of the instruments can depend on the included sensors, and although much anecdotal evidence and craft experience informs the use of these sensors, relatively little is known about the ability of humans to control them accurately. This dissertation addresses this issue and related concerns, including continuous control performance with increasing degrees of freedom, pursuit tracking in comparison with pointing, and the estimations by musical interface designers and researchers of human performance with continuous control sensors. The methodology models the human-computer system as an information channel, applying concepts from information theory to performance data collected in studies of human subjects using sensing devices. These studies not only add to knowledge about human abilities, but also inform issues in musical mapping, ergonomics, and usability.
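    Modelling the human-computer system as an information channel typically means estimating the mutual information between what the subject was asked to produce and what the sensor recorded. A hedged sketch, assuming a simple plug-in histogram estimator over paired target/response samples (the bin count is an arbitrary choice, and the dissertation's estimator may differ):

```python
# Plug-in (histogram) estimate of mutual information I(X;Y) in bits
# between commanded targets X and produced sensor values Y; one
# concrete way to treat the human-sensor loop as a channel.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of X
    py = pxy.sum(axis=0, keepdims=True)         # marginal of Y
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```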