
    AirMouse: Finger Gesture for 2D and 3D Interaction

    This paper presents AirMouse, a new interaction technique based on finger gestures above a laptop's keyboard. At reasonably low cost, the technique can replace traditional methods for pointing in two or three dimensions. Moreover, device-switching time is reduced and no surface beyond the laptop itself is needed. In a 2D pointing evaluation, a vision-based implementation of the technique is compared with commonly used devices; the same implementation is also compared with the two most commonly used 3D pointing devices. The two user experiments show the benefits of this versatile technique: it is easy to learn, intuitive, and efficient. In particular, the experiments show that performance with AirMouse is promising in comparison with a touchpad and with dedicated 3D pointing devices, and that AirMouse offers better performance than FlowMouse, a previous solution using fingers above the keyboard.

    Performance, Characteristics, and Error Rates of Cursor Control Devices for Aircraft Cockpit Interaction

    This document is the Accepted Manuscript version of: Peter R. Thomas, 'Performance, Characteristics, and Error Rates of Cursor Control Devices for Aircraft Cockpit Interaction', International Journal of Human-Computer Studies, Vol. 109: 41-53, published online 31 August 2017 by Elsevier. This paper provides a comparative performance analysis of a hands-on-throttle-and-stick (HOTAS) cursor control device (CCD) against other CCDs suitable for an aircraft cockpit: an isotonic thumbstick, a trackpad, a trackball, and touchscreen input. The performance and characteristics of these five CCDs were investigated in terms of throughput, movement accuracy, and error rate using the ISO 9241-9 standard task. Results show statistically significant differences (p < 0.001) between three groupings of the devices, with the HOTAS having the lowest throughput (0.7 bits/s) and the touchscreen the highest (3.7 bits/s). Errors for all devices increased with decreasing target size (p < 0.001) and, to a lesser extent, with increasing target distance (p < 0.01). The trackpad was found to be the most accurate of the five devices, significantly better than the HOTAS fingerstick and touchscreen (p < 0.05), with the touchscreen performing poorly when selecting smaller targets (p < 0.05). These results would be useful to cockpit human-machine interface designers and provide evidence of the need to move away from, or significantly augment the capabilities of, this type of HOTAS CCD in order to improve pilot task throughput in increasingly data-rich cockpits.
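    The throughput figures above come from the ISO 9241-9 methodology, which divides a task's index of difficulty by its movement time. As a rough illustration (not the paper's code; the standard actually uses effective distances and widths computed from observed endpoint scatter), the core calculation looks like this:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of the Fitts index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """ISO 9241-9 style throughput in bits/s: ID divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative numbers: 256 px target distance, 32 px target width,
# 1.2 s mean movement time (not values from the paper).
ID = index_of_difficulty(256, 32)   # log2(9) ~ 3.17 bits
tp = throughput(256, 32, 1.2)       # ~ 2.64 bits/s
```

    On this scale, the gap the paper reports between the HOTAS (0.7 bits/s) and the touchscreen (3.7 bits/s) is a roughly fivefold difference in effective information transfer.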

    Testing pointing device performance and user assessment with the ISO 9241, Part 9 standard


    Augmenting User Interfaces with Haptic Feedback

    Computer assistive technologies have developed considerably over the past decades. Advances in computer software and hardware have provided motion-impaired operators with much greater access to computer interfaces. For people with motion impairments, the main difficulty in the communication process is the input of data into the system. For example, the use of a mouse or a keyboard demands a high level of dexterity and accuracy. Traditional input devices are designed for able-bodied users and often do not meet the needs of someone with disabilities. As the key feature of most graphical user interfaces (GUIs) is to point-and-click with a cursor, this can make a computer inaccessible for many people. Human-computer interaction (HCI) is an important area of research that aims to improve communication between humans and machines. Previous studies have identified haptics as a useful method for improving computer access. However, traditional haptic techniques suffer from a number of shortcomings that have hindered their inclusion in real-world software. The focus of this thesis is to develop haptic rendering algorithms that permit motion-impaired operators to use haptic assistance with existing graphical user interfaces. The main goal is to improve interaction by reducing error rates and improving targeting times. A number of novel haptic assistive techniques are presented that utilise the three degrees-of-freedom (3DOF) capabilities of modern haptic devices to produce assistance designed specifically for motion-impaired computer users. To evaluate the effectiveness of the new techniques, a series of point-and-click experiments were undertaken in parallel with cursor analysis to compare levels of performance. The task required the operator to produce a predefined sentence on the densely populated Windows on-screen keyboard (OSK). The results of the study show that higher performance levels can be achieved using techniques that are less constricting than traditional assistance.
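    One classic form of haptic targeting assistance is the gravity well: a force that pulls the device's cursor toward a target centre once it enters a capture radius. The sketch below is a generic, illustrative formulation of that idea, not the thesis's actual rendering algorithms; the radius and stiffness values are arbitrary.

```python
import math

def gravity_well_force(cursor, target, radius=40.0, stiffness=0.15):
    """Return a 2D assistive force pulling the cursor toward a target
    centre, active only inside the well's capture radius (a common
    gravity-well formulation; all parameters are illustrative)."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)            # outside the well: no assistance
    # Spring-like pull proportional to penetration into the well.
    magnitude = stiffness * (radius - dist)
    return (magnitude * dx / dist, magnitude * dy / dist)
```

    A rendering loop would add this force to the device's output each frame; the thesis's point is that such assistance must be tuned carefully, since overly strong wells are exactly the kind of constricting behaviour its techniques try to avoid.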

    Interactions under the desk: a characterisation of foot movements for input in a seated position

    We characterise foot movements as input for seated users. First, we built unconstrained foot pointing performance models in a seated desktop setting using ISO 9241-9-compliant Fitts's Law tasks. Second, we evaluated the effect of the foot and direction in one-dimensional tasks, finding no effect of the foot used, but a significant effect of the direction in which targets are distributed. Third, we compared one foot against two feet to control two variables, finding that while one foot is better suited for tasks with a spatial representation that matches its movement, there is little difference between the techniques when it does not. Fourth, we analysed the overhead caused by introducing a feet-controlled variable in a mouse task, finding the feet to be comparable to the scroll wheel. Our results show the feet are an effective method of enhancing our interaction with desktop systems, and we derive a series of design guidelines.

    Methods and metrics for the improvement of the interaction and the rehabilitation of cerebral palsy through inertial technology

    Cerebral palsy (CP) is one of the most limiting disabilities in childhood, with 2.2 cases per 1000 one-year survivors. It is a disorder of movement and posture due to a defect or lesion of the immature brain during pregnancy or birth. These motor limitations frequently appear in combination with sensory and cognitive alterations, which generally results in great difficulties for some people with CP to manipulate objects, communicate and interact with their environment, as well as limiting their mobility. Over the last decades, instruments such as personal computers have become a popular tool to overcome some of these motor limitations and promote neural plasticity, especially during childhood. According to some estimations, 65% of youths with CP who present severely limited manipulation skills cannot use standard mice or keyboards. Unfortunately, even when people with CP use assistive technology for computer access, they face barriers that lead to the use of typical mice, trackballs or touch screens for practical reasons. Nevertheless, with proper customization, novel alternative input devices such as head mice or eye trackers can be a valuable solution for these individuals. This thesis presents a collection of novel mapping functions and facilitation algorithms proposed and designed to ease the act of pointing to graphical elements on the screen—the most elemental task in human-computer interaction—for individuals with CP. These developments were implemented to be used with any head mouse, although they were all tested with the ENLAZA, an inertial interface. The development of such techniques required the following approach: developing a methodology to evaluate the performance of individuals with CP in pointing tasks, which are usually described as two sequential subtasks, navigation and targeting; identifying the main motor abnormalities present in individuals with CP and assessing the compliance of these people with standard motor behaviour models such as Fitts's law; and designing and validating three novel pointing facilitation techniques to be implemented in a head mouse. These techniques were conceived for users with CP and muscle weakness who have great difficulty maintaining their heads in a stable position. The first two algorithms consist of two novel mapping functions that aim to facilitate the navigation phase, whereas the third technique is based on gravity wells and was specially developed to facilitate the selection of elements on the screen. In parallel with the development of the facilitation techniques for the interaction process, we evaluated the feasibility of using inertial technology for the control of serious videogames as a complement to traditional rehabilitation therapies for posture and balance. The experimental validation presented here confirms that this concept could be implemented in clinical practice with good results. In summary, the works presented here prove the suitability of using inertial technology for the development of an alternative pointing device—and pointing algorithms—based on movements of the head for individuals with CP and severely limited manipulation skills, and of new rehabilitation therapies for the improvement of posture and balance. All the contributions were validated in collaboration with several centres specialized in CP and similar disorders, with users with disability recruited in those centres. This thesis was completed in the Group of Neural and Cognitive Engineering (gNEC) of the CAR UPM-CSIC with the financial support of the FP7 Framework EU Research Project ABC (EU-2012-287774), the IVANPACE Project (funded by Obra Social de Caja Cantabria, 2012-2013), and the Spanish Ministry of Economy and Competitiveness in the framework of two projects: the Interplay Project (RTC-2014-1812-1) and, most recently, the InterAAC Project (RTC-2015-4327-1). Official doctoral programme in Electrical, Electronic and Automatic Engineering. Committee: President: Juan Manuel Belda Lois; Secretary: María Dolores Blanco Rojas; Member: Luis Fernando Sánchez Sante
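    The abstract does not detail the thesis's mapping functions, so the following is only an illustrative stand-in: one simple way a head-mouse mapping can help users with muscle weakness is a dead-zone gain that suppresses small involuntary head movements while passing deliberate ones through. All thresholds here are assumed values.

```python
def mapped_displacement(delta, deadzone=0.5, gain=2.0):
    """Map a raw head-orientation change (e.g. degrees per frame) to
    cursor movement, ignoring motion below a dead zone -- an illustrative
    stand-in for the thesis's mapping functions, which the abstract does
    not specify. deadzone and gain are assumed, tunable parameters."""
    magnitude = abs(delta)
    if magnitude < deadzone:
        return 0.0                   # suppress small involuntary motion
    # Scale only the part of the movement that exceeds the dead zone,
    # so cursor motion starts smoothly at the threshold.
    sign = 1.0 if delta > 0 else -1.0
    return sign * gain * (magnitude - deadzone)
```

    Per-user tuning of such parameters is exactly the kind of customization the abstract argues these interfaces need.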

    Circling interface: an alternative interaction method for on-screen object manipulation

    An alternative interaction method, called the circling interface, was developed and evaluated for individuals with disabilities who find it difficult or impossible to consistently and efficiently perform pointing operations involving the left and right mouse buttons. The circling interface is a gesture-based interaction technique. To specify a target of interest, the user makes a circling motion around the target. To specify a desired pointing command, each edge of the screen is used: the user selects a command before circling the target. Empirical evaluations were conducted with human subjects from three different groups (individuals without disability, individuals with spinal cord injury, and individuals with cerebral palsy), comparing each group's performance on pointing tasks with the circling interface to performance on the same tasks when using a mouse button or dwell-clicking software. Across all three groups, the circling interface was faster than the dwelling interface, although the difference was not statistically significant. For the single-click operation, the circling interface was slower than dwell selection, but for both double-click and drag-and-drop operations, the circling interface was faster. In terms of accuracy, the results were mixed: for able-bodied subjects circling was more accurate than dwelling, for subjects with SCI dwelling was more accurate than circling, and for subjects with CP there was no difference. However, if errors caused by circling on an area with no target, or by circles that are too small or too fast, were automatically corrected by the circling interface, it would be significantly more accurate than dwell selection. This suggests that the circling interface can be used in conjunction with existing pointing techniques, and this combined approach may provide more effective mouse use for people with pointing problems. Consequently, the circling interface can improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. It is also expected to be useful for both computer access and augmentative communication software.
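    At the core of a circling interface is a geometric decision: does the closed stroke the user drew enclose the target? A standard way to answer this is a point-in-polygon test on the sampled gesture path. The sketch below uses ray casting; it is a generic formulation, not the dissertation's implementation, which would also need the size and speed filtering described above.

```python
def encloses(path, point):
    """Ray-casting point-in-polygon test: does the closed gesture path
    (a list of (x, y) samples) enclose the target point?"""
    x, y = point
    inside = False
    n = len(path)
    for i in range(n):
        x1, y1 = path[i]
        x2, y2 = path[(i + 1) % n]   # wrap around to close the stroke
        if (y1 > y) != (y2 > y):     # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing toggles inside/outside
    return inside

# A rough "circle" (here a square of samples) around the origin
# encloses (0, 0) but not (5, 5).
stroke = [(-2, -2), (2, -2), (2, 2), (-2, 2)]
```

    Real gesture strokes rarely close exactly, which is why the wrap-around edge from the last sample back to the first matters in practice.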

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to make this interaction more natural and more intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to the user and allows interaction at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used for depth information recovery. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing to real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated through various user studies. Results from this thesis suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
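    A virtual touchscreen of this kind ultimately reduces to intersecting the user's pointing ray (for example, the eye-to-fingertip line) with a plane floating between the user and the display. The sketch below shows that geometric core under an assumed coordinate setup; the thesis's actual system would first have to estimate these 3D positions from a single camera, which is the hard part the sketch omits.

```python
def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a pointing ray (e.g. the eye-to-fingertip line) with the
    plane of a virtual touchscreen. Returns the 3D hit point, or None if
    the ray is parallel to the plane or points away from it."""
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(dot) < 1e-9:
        return None                  # ray parallel to the screen plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / dot
    if t < 0:
        return None                  # plane lies behind the user
    return tuple(o + t * d for o, d in zip(origin, direction))

# Assumed setup: eye at the origin, virtual screen plane at z = 2
# with normal along +z; the hit point gives the 2D cursor position.
hit = intersect_ray_plane((0, 0, 0), (0.1, 0.2, 1.0), (0, 0, 2), (0, 0, 1))
```

    Because the plane is re-estimated from the user's current location, the "touchscreen" moves with them, which is what makes the interaction feel direct.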

    Using pressure input and thermal feedback to broaden haptic interaction with mobile devices

    Pressure input and thermal feedback are two under-researched aspects of touch in mobile human-computer interfaces. Pressure input could provide a wide, expressive range of continuous input for mobile devices. Thermal stimulation could provide an alternative means of conveying information non-visually. This thesis investigated 1) how accurate pressure-based input on mobile devices could be when the user was walking and provided with only audio feedback, and 2) what forms of thermal stimulation are both salient and comfortable and so could be used to design structured thermal feedback for conveying multi-dimensional information. The first experiment tested control of pressure on a mobile device when sitting and using audio feedback: targeting accuracy was >= 85% when maintaining 4-6 levels of pressure across 3.5 Newtons, using only audio feedback and a dwell selection technique. Two further experiments tested control of pressure-based input when walking and found accuracy was very high (>= 97%) even when walking and using only audio feedback, when using a rate-based input method. A fourth experiment tested how well each digit of one hand could apply pressure to a mobile phone individually and in combination with others. Each digit could apply pressure highly accurately, but not equally so, and some performed better in combination than alone; 2- or 3-digit combinations were more precise than 4- or 5-digit combinations. Experiment 5 compared one-handed, multi-digit pressure input using all 5 digits to traditional two-handed multitouch gestures for a combined zooming and rotating map task. Results showed comparable performance, with multitouch being ~1% more accurate but pressure input being ~0.5 s faster overall. Two experiments, one when sitting indoors and one when walking indoors, tested how salient and subjectively comfortable/intense various forms of thermal stimulation were. Faster or larger thermal changes were more salient, faster to detect and less comfortable, and cold changes were more salient and faster to detect than warm changes. The two final studies designed two-dimensional structured 'thermal icons' that could convey two pieces of information: when indoors, icons were correctly identified with 83% accuracy; outdoors, accuracy dropped to 69% when sitting and 61% when walking. This thesis provides the first detailed study of how precisely pressure can be applied to mobile devices when walking with only audio feedback, and the first systematic study of how to design thermal feedback for interaction with mobile devices in mobile environments.
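    Maintaining "4-6 levels of pressure across 3.5 Newtons" implies quantising a continuous force reading into discrete bands, each of which can trigger distinct audio feedback. The sketch below shows one straightforward equal-width quantisation; it is illustrative of the task, not the study's actual code, and the level count and force range are taken from the abstract.

```python
def pressure_level(force_newtons, n_levels=5, max_force=3.5):
    """Quantise a continuous pressure reading into one of n_levels
    equal-width bands across 0..max_force newtons -- illustrative of the
    experiments' 4-6 level targeting task, not the study's actual code."""
    force = max(0.0, min(force_newtons, max_force))   # clamp to sensor range
    band = max_force / n_levels
    # The top of the range belongs to the highest level.
    return min(int(force / band), n_levels - 1)
```

    An audio-only interface would then sound a distinct cue whenever the returned level changes, which is what lets users hold a target band without looking at the screen.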