
    Human Performance Modeling For Two-Dimensional Dwell-Based Eye Pointing

    Recently, Zhang et al. (2010) proposed an effective performance model for dwell-based eye pointing. However, their model was based on a specific circular target condition and cannot predict performance when acquiring conventional rectangular targets, which limits its applicability. In this paper, we extend their one-dimensional model to two-dimensional (2D) target conditions. In two experiments, we evaluated several candidate models to identify the most appropriate one. The index of difficulty we redefine for 2D eye pointing (IDeye) properly reflects the asymmetrical impact of target width and height, with the latter exceeding the former, and consequently the IDeye model can accurately predict performance for 2D targets. Importantly, we also find that this asymmetry holds across varying movement directions. Based on the results of our study, we provide useful implications and recommendations for gaze-based interaction.
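    The abstract does not give the exact form of IDeye, so the following is only a hedged sketch: a Fitts-style index of difficulty in which height is weighted more heavily than width, mirroring the asymmetry the abstract reports. The weighting rule, the coefficients, and both function names are illustrative assumptions, not the paper's published model.

    ```python
    import math

    def id_eye_2d(distance, width, height, k_w=1.0, k_h=2.0):
        """Illustrative index of difficulty for 2D dwell-based eye pointing.

        The reported asymmetry (target height matters more than width) is
        modeled here by discounting height more strongly; the actual IDeye
        formulation is not given in this abstract.
        """
        effective_size = min(width / k_w, height / k_h)  # hypothetical rule
        return math.log2(distance / effective_size + 1)

    def predict_dwell_time(distance, width, height, a=0.3, b=0.25):
        """Linear law T = a + b * ID, with placeholder regression constants."""
        return a + b * id_eye_2d(distance, width, height)
    ```

    Under this sketch, a tall-narrow target (40 wide, 80 high) yields a lower ID than a wide-flat one (80 wide, 40 high) at the same distance, which is the qualitative behaviour the abstract describes.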

    Evaluating 3D pointing techniques

    This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term "3D pointing" is appropriated from the analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Building on well-established methods for comparing 2D pointing interfaces, this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for direct and fair comparisons between 2D and 3D pointing interfaces. The dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates technical considerations such as latency and device noise. Results show that the mouse outperforms the other tested 3D input techniques by between 10% and 60% in all conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor when using a stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.
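    The closing result, that 3D pointing is best modelled by first projecting target distance and size to the screen plane, can be sketched as follows. The pinhole geometry and the viewing constants are illustrative assumptions; the dissertation's actual projection model is not specified in this abstract.

    ```python
    import math

    EYE_Z, SCREEN_Z = -600.0, 0.0  # illustrative viewing geometry (mm)

    def depth_scale(z):
        """Perspective scale factor for a point at depth z (pinhole model)."""
        return (SCREEN_Z - EYE_Z) / (z - EYE_Z)

    def project_to_screen(point):
        """Project a 3D point onto the screen plane."""
        x, y, z = point
        s = depth_scale(z)
        return (x * s, y * s)

    def projected_fitts_id(start, target, target_size):
        """Fitts index of difficulty from screen-projected parameters.

        Project the 3D movement amplitude and the target size to the screen
        plane first, then apply the standard 2D Shannon formulation
        ID = log2(A/W + 1).
        """
        sx, sy = project_to_screen(start)
        tx, ty = project_to_screen(target)
        amplitude = math.hypot(tx - sx, ty - sy)
        width = target_size * depth_scale(target[2])
        return math.log2(amplitude / width + 1)
    ```

    Deeper targets project to smaller on-screen sizes, which is consistent with the reported disadvantage of stereo cursors for deep targets.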

    Assisted Interaction for Improving Web Accessibility: An Approach Driven and Tested by Users with Disabilities

    An ever-growing share of the world's population depends on the Web to work, socialize, or stay informed, among many other activities. The benefits of the Web are even more crucial for people with disabilities, since it lets them carry out countless tasks that are restricted to them in the physical world by various accessibility barriers. Despite these advantages, most web pages ignore the special needs of people with disabilities and offer a single design for all users. Several methods exist to combat this problem, such as "transcoding" systems, which automatically transform inaccessible web pages into accessible ones. To improve web accessibility for specific groups of people, these methods require information about the most suitable adaptation techniques to apply. This thesis presents a series of studies on the suitability of various adaptation techniques for improving web navigation for two different groups of people with disabilities: people with reduced mobility in their upper limbs and people with low vision. Based on literature reviews and observational studies, different web interface adaptations and alternative interaction techniques were developed and subsequently evaluated in several studies with users with special needs. Through qualitative and quantitative analyses of participants' performance and satisfaction, various interface adaptations and alternative interaction methods were assessed. The results show that the tested techniques improve access to the Web and that the benefits vary depending on the assistive technology used to access the computer.

    Operant EEG-based BMI: Learning and consolidating device control with brain activity

    Whether you are reading this thesis on paper or on screen, it is easy to take for granted all the highly specialized movements you are performing at this very moment just to go through each page. Just to turn a page, you have to reach for and grasp it, turn it, and let go at the precise moment so as not to rip it. (...)

    The development and evaluation of gaze selection techniques

    Eye gaze interaction enables users to interact with computers using their eyes. A wide variety of eye gaze interaction techniques have been developed to support this type of interaction. Gaze selection techniques, a class of eye gaze interaction techniques which support target selection, are the subject of this research. Researchers developing these techniques face a number of challenges. The most significant challenge is the limited accuracy of eye tracking equipment (due to the properties of the human eye). The design of gaze selection techniques is dominated by this constraint. Despite decades of research, existing techniques are still significantly less accurate than the mouse. A recently developed technique, EyePoint, represents the state of the art in gaze selection techniques. EyePoint combines gaze input with keyboard input. Evaluation results for this technique are encouraging, but accuracy is still a concern. Early trigger errors, resulting from users triggering a selection before looking at the intended target, were found to be the most commonly occurring errors for this technique. The primary goal of this research was to improve the usability of gaze selection techniques. In order to achieve this goal, novel gaze selection techniques were developed. New techniques were developed by combining elements of existing techniques in novel ways. Seven novel gaze selection techniques were developed. Three of these techniques were selected for evaluation. A software framework was developed for implementing and evaluating gaze selection techniques. This framework was used to implement the gaze selection techniques developed during this research. Implementing and evaluating all of the techniques using a common framework ensured consistency when comparing the techniques. The novel techniques which were developed were evaluated against EyePoint and the mouse using the framework. 
The three novel techniques evaluated were named TargetPoint, StaggerPoint and ScanPoint. TargetPoint combines motor space expansion with a visual feedback highlight, whereas the StaggerPoint and ScanPoint designs explore novel approaches to target selection disambiguation. A usability evaluation of the three novel techniques alongside EyePoint and the mouse revealed some interesting trends. TargetPoint was found to be more usable and accurate than EyePoint. This novel technique also proved more popular with test participants. One aspect of TargetPoint which proved particularly popular was the visual feedback highlight, a feature which was found to be a more effective method of combating early trigger errors than existing approaches. StaggerPoint was more efficient than EyePoint, but was less effective and satisfying. ScanPoint was the least popular technique. The benefits of providing a visual feedback highlight, and test participants' positive views thereof, contradict views expressed in existing research regarding the usability of visual feedback. These results have implications for the design of future gaze selection techniques. A set of design principles was developed for designing new gaze selection techniques. The designers of gaze selection techniques can benefit from these design principles by applying them to their technique.
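    TargetPoint's two named ingredients, motor space expansion and a pre-trigger feedback highlight, can be sketched roughly as below. The selection rule, the expansion factor, and the function name are illustrative guesses, not TargetPoint's published design.

    ```python
    def pick_target(gaze, targets, expansion=1.5):
        """Return the target under the gaze point, using expanded bounds.

        'targets' is a list of (x, y, w, h) rectangles. Each rectangle is
        tested at 'expansion' times its visual size (motor space expansion,
        which compensates for eye-tracker inaccuracy); ties go to the target
        whose centre is closest to the gaze point. This rule is a sketch,
        not the technique's actual implementation.
        """
        gx, gy = gaze
        hits = []
        for (x, y, w, h) in targets:
            ew, eh = w * expansion, h * expansion
            cx, cy = x + w / 2, y + h / 2
            if abs(gx - cx) <= ew / 2 and abs(gy - cy) <= eh / 2:
                hits.append(((gx - cx) ** 2 + (gy - cy) ** 2, (x, y, w, h)))
        if not hits:
            return None
        return min(hits)[1]

    # A visual feedback highlight would then be drawn on pick_target's result
    # before the user presses the trigger key, so an early trigger selects the
    # highlighted target rather than a mis-estimated gaze point.
    ```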

    Autonomous Navigation of Mobile Robots: Marker-based Localization System and On-line Path

    Traditional wheelchairs are controlled mainly by joystick, which is not a suitable solution for people with major disabilities. This thesis aims to create a human-machine interface and software that performs indoor autonomous navigation of the commercial wheelchair RoboEye, developed at the Measurements Instrumentations Robotic Laboratory at the University of Trento in collaboration with Robosense and Xtrensa. RoboEye is an intelligent wheelchair that aims to provide independence and autonomy of movement to people affected by serious mobility problems from impairing pathologies (for example ALS, amyotrophic lateral sclerosis). The thesis is divided into two main parts: creating the human-machine interface and integrating existing services into the developed solution, and presenting a possible solution for how the wheelchair can navigate using eye-tracking technologies, TOF cameras, odometric localization and ArUco markers. The developed interface supports manual, semi-autonomous and autonomous navigation, while following user-experience practices specific to eye-tracking devices and to people with major disabilities. The application was developed in Unity 3D using C# scripts, following a state-machine approach with multiple scenes and components. The suggested solution satisfies the user's need to navigate hands-free, with as little fatigue as possible. Moreover, the user can choose a destination from predefined points of interest and reach it with no further input needed. The user interface is intuitive and clear for both experienced and inexperienced users, and the user can adjust the UI's icon images, scale and font size. The software runs as a state machine, which was tested with users through test cases. The path-planning routine uses Dijkstra's algorithm and proved to be efficient.
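    The path-planning routine named above (Dijkstra) can be sketched on a waypoint graph of rooms. The graph structure and the room names are illustrative; the thesis' actual map representation is not described in this abstract.

    ```python
    import heapq

    def dijkstra(graph, start, goal):
        """Shortest path on a weighted graph given as {node: {neighbour: cost}}.

        A minimal sketch of Dijkstra's algorithm for planning a route
        between predefined points of interest; returns (cost, path).
        """
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, edge_cost in graph.get(node, {}).items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
        return float("inf"), []
    ```

    For example, with edges dock-hall (2), hall-kitchen (3), hall-bedroom (5) and kitchen-bedroom (1), the planner prefers the dock-hall-kitchen-bedroom route (cost 6) over the direct hall-bedroom edge (cost 7).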

    Eye movement, memory and tempo in the sight reading of keyboard music


    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focusses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focusses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for drift effects in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds).
Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.

    ECoG correlates of visuomotor transformation, neural plasticity, and application to a force-based brain computer interface

    Electrocorticography (ECoG) has gained increased notoriety over the past decade as a possible recording modality for Brain-Computer Interface (BCI) applications, offering a balance of minimal invasiveness to the patient and robust spectral information over time. More recently, the scale of ECoG devices has begun to shrink to the order of micrometer-diameter contacts and millimeter spacings, with the intent of extracting more independent signals for BCI control within less cortical real estate. However, most control signals to date, whether within the field of ECoG or any of the more seasoned recording techniques, have been translated to kinematic control parameters (i.e., position or velocity of an object), which may not be practical for certain BCI applications such as functional neuromuscular stimulation (FNS). Thus, the purpose of this dissertation was to present a novel application of ECoG signals to a force-based control algorithm and address its feasibility for such a BCI system. Micro-ECoG arrays constructed from thin-film polyimide were implanted epidurally over areas spanning premotor, primary motor, and parietal cortical areas of two monkeys (three hemispheres, three arrays). The monkeys first learned to perform a classic center-out task using a brain signal-to-velocity mapping for control of a computer cursor. The BCI algorithm used day-to-day adaptation of the decoding model to match the task intention of the monkeys, with no need for pre-screening of movement-related ECoG signals. Using this strategy, subjects showed notable 2-D task proficiency and increased task-related modulation of ECoG features within five training sessions. After fixing the last model trained for velocity control of the cursor, the monkeys then used this decoding model to control the acceleration of the cursor in the same center-out task.
Cursor movement profiles under this mapping paralleled those demonstrated using velocity control, and neural control signal profiles revealed that the monkeys actively accelerated and decelerated the cursor within a limited time window (1-1.5 seconds). The fixed BCI decoding model was recast once again to control the force on a virtual cursor in a novel mass-grab task. This task required the monkeys not only to reach to peripheral targets but also to account for an additional virtual mass as they grabbed each target and moved it to a second target location in the presence of the external force of gravity. Examination of the ensemble control signals showed neural adaptation to variations in the perceived mass of the target as well as to the presence or absence of gravity. Finally, short rest periods were interleaved within blocks of each task type to elucidate differences between active BCI intention and rest. Using a post-hoc state-decoder model, periods of active BCI task control could be distinguished from periods of rest with a very high degree of accuracy (~99%). Taken together, the results from these experiments present a first step toward the design of a dynamics-based BCI system suitable for FNS applications, as well as a framework for implementation of an asynchronous ECoG BCI.
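    The distinction the abstract draws between kinematic and dynamics-based control can be sketched as three ways of applying the same decoded control signal to a cursor. The 1-D simplification, the constants, and the function name are illustrative, not the dissertation's implementation.

    ```python
    def step_cursor(state, u, mapping="velocity", dt=0.05, mass=1.0, gravity=0.0):
        """Advance a 1-D cursor one time step under a decoded control signal u.

        The same decoder output can drive kinematic control (u sets velocity)
        or dynamics-based control (u sets acceleration, or u acts as a force
        with a = (F_control - m*g) / m plus an optional gravity term), echoing
        the velocity, acceleration, and force mappings described above.
        """
        pos, vel = state
        if mapping == "velocity":
            vel = u                                   # kinematic: u is velocity
        elif mapping == "acceleration":
            vel += u * dt                             # kinematic: u is acceleration
        elif mapping == "force":
            vel += ((u - gravity * mass) / mass) * dt # dynamic: u is a force
        pos += vel * dt
        return pos, vel
    ```

    Under the force mapping, a heavier virtual mass yields less acceleration for the same control signal, which is the kind of perceived-mass variation the mass-grab task probed.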