16 research outputs found

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Full text link
    Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
    Funding: F31 DC014872, R01 DC002852, R01 DC007683 (NIDCD NIH HHS); T90 DA032484 (NIDA NIH HHS).

    Effectiveness of Eye-Gaze Input System -Identification of Conditions that Assures High Pointing Accuracy and Movement Directional Effect-

    Get PDF
    The conditions under which high accuracy is assured when using an eye-gaze input system were identified, and the effect of eye-movement direction on performance was investigated. Age, the arrangement of targets (vertical or horizontal), target size, and the distance between adjacent targets were selected as experimental factors. The difference in pointing velocity between a mouse and an eye-gaze input system was larger for older adults than for young adults; an eye-gaze input system was therefore found to be especially effective for older adults and might compensate for their declined motor function. The pointing accuracy of an eye-gaze input system was higher in the horizontal arrangement than in the vertical arrangement. A distance between targets of more than 20 pixels was found to be desirable for both arrangements. For both the vertical and horizontal arrangements, a target size of more than 40 pixels led to higher accuracy and faster pointing time for both young and older adults. For both age groups, the pointing time tended to be longer for the downward direction than for the other directions.

    Effectiveness of the menu selection method for eye-gaze input system - Comparison between young and older adults -

    Get PDF
    Although older adults have more and more opportunities to use personal computers, operating a computer with a mouse is very difficult for older adults who cannot move their arms smoothly and effectively due to declined motor function. Moving a cursor with an eye-gaze input system has been explored as one solution to this problem, but until now a menu selection method suitable for an eye-gaze input system had not been clarified. In this study, an effective menu selection method for the eye-gaze input system was identified as a basic design parameter for developing a Web browser using an eye-gaze input system. Concretely, an improved quick glance menu selection method (I-QGMS) was proposed. Its effectiveness was evaluated by means of pointing accuracy, pointing time, and psychological ratings of usability. On the basis of the evaluation experiment, the proposed I-QGMS was found to be especially effective for older adults.

    Markerless monocular tracking system for guided external eye surgery

    Full text link
    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. This new tracking system performs very accurate tracking of the eye by detecting invariant points using only textures that are present in the sclera, i.e., without using traditional features such as the pupil and/or corneal reflections, which remain partially or totally occluded in most surgeries. Two well-known algorithms that compute invariant points and correspondences between pairs of images were implemented in the system: the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The results of experiments performed on phantom eyes show that, with either algorithm, the developed system tracks a sphere through a 360° rotation with an error lower than 0.5%. Experiments have also been carried out on images of real eyes, showing promising behavior of the system in the presence of blood or surgical instruments during real eye surgery. © 2014 Elsevier Ltd. All rights reserved.
    Monserrat Aranda, C.; Rupérez Moreno, MJ.; Alcañiz Raya, ML.; Mataix, J. (2014). Markerless monocular tracking system for guided external eye surgery. Computerized Medical Imaging and Graphics 38(8):785-792. doi:10.1016/j.compmedimag.2014.08.001

    A Novel Approach to 3-D Gaze Tracking Using Stereo Cameras

    Full text link

    Evaluation of tactile feedback on dwell time progression in eye typing

    Get PDF
    Haptic feedback is known to be important in manual interfaces, yet gaze-based interactive systems usually do not involve haptic feedback. In this thesis, I investigated whether an eye typing system, which uses an eye tracker as an input device, can benefit from tactile feedback as an indication of dwell time progression. Dwell time is an effective selection method in eye typing systems: the user keeps her/his gaze on a certain element for a predetermined amount of time to activate it. The tactile feedback was given by a vibrotactile actuator to the participant's finger, which rested on top of the actuator. This thesis reports a comparison of three tactile feedback conditions for dwell time progression during eye typing: "Ascending" feedback, "Warning" feedback, and "No dwell" feedback (i.e., no feedback given for dwell). The conditions were compared in a within-participants experiment in which each participant used the eye typing system with all feedback conditions in a counterbalanced order. Two sessions were conducted to observe learning effects. The comparison consisted of quantitative and qualitative measures. The quantitative data included text entry speed in words per minute (WPM), error rate, keystrokes per character (KSPC), read text events (RTE), and re-focus events (RFE). RTE referred to events in which the participant moved the gaze to the text input field; RFE occurred when the participant moved her/his gaze away from a key too early and therefore had to re-focus on the same key. The qualitative data were collected from the participants' answers to questionnaires. The quantitative results reflected a learning effect between the two sessions in all three conditions. KSPC showed a statistically significant difference between the feedback conditions: "No dwell" feedback was associated with lower KSPC than "Ascending" feedback, indicating that "Ascending" feedback led to more extra effort by the participants.
    The qualitative data did not indicate any statistically significant difference among the feedback conditions or between the sessions. However, more research with different types of haptic actuators is required to validate the results.
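The dwell-time selection method described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation; the class name, the `update`/`progress` interface, and the default threshold are all assumptions made for the example.

```python
# Minimal sketch of dwell-time selection for eye typing (illustrative,
# not the system used in the thesis). A key fires only after the gaze
# has rested on it for dwell_ms; looking away resets the timer, which
# corresponds to the re-focus events (RFE) counted in the study.
class DwellSelector:
    def __init__(self, dwell_ms=1000):
        self.dwell_ms = dwell_ms
        self.current_key = None
        self.enter_time = None

    def update(self, key, now_ms):
        """Feed the key currently under gaze (None = no key) at time now_ms.
        Returns the key when its dwell completes, else None."""
        if key != self.current_key:
            # Gaze moved to a different key (or away) before the dwell
            # completed: restart the dwell timer.
            self.current_key = key
            self.enter_time = now_ms
            return None
        if key is not None and now_ms - self.enter_time >= self.dwell_ms:
            self.enter_time = now_ms  # re-arm so the key fires once per dwell
            return key
        return None

    def progress(self, now_ms):
        """Dwell progression in [0, 1] -- the quantity that the "Ascending"
        tactile feedback condition encodes for the user."""
        if self.current_key is None:
            return 0.0
        return min(1.0, (now_ms - self.enter_time) / self.dwell_ms)
```

A feedback condition such as "Ascending" would poll `progress()` each frame and drive the vibrotactile actuator's intensity from it, while "No dwell" would simply ignore it.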

    Estimation, Detection and Tracking of Point Objects ON VIDEO

    Get PDF
    Detecting an extremely small object in an image has always been an important problem. Detecting an object with a circular point spread function (PSF) in a focal plane array (FPA) produced by imaging sensors has several engineering applications. In recent work, the maximum likelihood (ML) detector was derived for image observations corrupted by Gaussian noise in each pixel. This ML detector is optimal under the assumption that the FPA contains a circular object whose signal intensity is spread over multiple image pixels in the form of a Gaussian PSF with known standard deviation. The efficiency of the estimation is validated by comparison with the Cramér-Rao lower bound (CRLB). In this thesis, we develop an approach to estimate the PSF's covariance, the noise covariance, and the total energy of the signal, and we generalize these results to a generic (elliptical) PSF. We applied the proposed method to a real-world application, eye tracking, which is emerging as an attractive method of human-computer interaction. In the last project included in this thesis, we consider the problem of eye gaze detection based on embedded cameras such as webcams. Unlike infrared cameras, a conventional camera suffers from fluctuations in ambient light, and we developed a novel approach to improve performance under these conditions. Further, we implemented our proposed ML approach to detect the center of the iris and showed it to be superior to existing approaches. Using these approaches, we demonstrate gaze estimation with the embedded webcam of a laptop.
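The elliptical-Gaussian PSF model that the thesis builds on can be illustrated with a small NumPy sketch: the object's intensity is spread over pixels as a 2-D Gaussian with some covariance, and a simple way to locate it is to correlate the image with that template (a matched filter). This is only a toy illustration of the signal model, not the thesis's ML detector; the function names and the particular covariance are assumptions.

```python
# Toy illustration of the elliptical Gaussian PSF signal model and a
# matched-filter search for the object's location (not the thesis's
# ML detector or covariance-estimation procedure).
import numpy as np

def gaussian_psf(size, cov):
    """Elliptical Gaussian PSF template on a size x size pixel grid,
    normalised to unit total energy."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    pts = np.stack([x.ravel(), y.ravel()], axis=1)          # (N, 2)
    quad = np.einsum('ij,jk,ik->i', pts, np.linalg.inv(cov), pts)
    psf = np.exp(-0.5 * quad).reshape(x.shape)
    return psf / psf.sum()

def matched_filter_peak(image, psf):
    """Slide the zero-mean PSF template over the image and return the
    centre (row, col) of the best-matching window."""
    t = psf - psf.mean()
    h, w = psf.shape
    best_score, best_rc = -np.inf, None
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            score = float(np.sum(image[r:r + h, c:c + w] * t))
            if score > best_score:
                best_score, best_rc = score, (r + h // 2, c + w // 2)
    return best_rc
```

Embedding such a PSF in pixel noise and running the matched filter recovers the object's centre; the ML formulation additionally estimates the PSF covariance and signal energy rather than assuming them known.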

    Haptic feedback to gaze events

    Get PDF
    Eyes are the window to the world, and most input from the surrounding environment is captured through them. In human-computer interaction too, gaze-based interactions are gaining prominence, with the user's gaze acting as an input to the system. Of late, portable and inexpensive eye-tracking devices have made inroads into the market, opening up wider possibilities for interacting with gaze. However, research on feedback for gaze-based events is limited. This thesis studies vibrotactile feedback for gaze-based interactions. It presents a study conducted to evaluate different types of vibrotactile feedback and their role in responding to a gaze-based event. In the experimental setup, when the user fixated the gaze on a functional object, vibrotactile feedback was provided either on the wrist or on the glasses. The study seeks to answer questions such as the helpfulness of vibrotactile feedback in identifying functional objects, user preference for the type of vibrotactile feedback, and user preference for the location of the feedback. The results indicate that vibrotactile feedback was an important factor in identifying the functional object. Preference for the type of vibrotactile feedback was somewhat inconclusive, as it varied widely among the users, and personal preference largely influenced the choice of location for receiving the feedback.

    Oculomotor responses and 3D displays

    Get PDF
    This thesis investigated some of the eye movement factors related to the development and use of eye pointing devices with three-dimensional displays (stereoscopic and linear perspective). For eye pointing to be used successfully for input control of a 3D display, it is necessary to characterise the accuracy and speed with which the binocular point of foveation can locate a particular point in 3D space. Linear perspective was found to be insufficient to elicit a change in the depth of the binocular point of fixation except under optimal conditions (monocular viewing, accommodative loop open, and constant display paradigm). Comparison of the oculomotor responses made between a stereoscopic 'virtual' display and a 'real' display showed no differences with regard to target fixational accuracy. With one exception, subjects showed the same degree of fixational accuracy with respect to target direction and depth. However, close target proximity (in terms of direction) affected the accuracy of fixation with respect to depth (but not direction). No differences were found between the fixational accuracy of large and small targets under either display condition. The visual conditions eliciting fast changes in the location of the binocular point of foveation, i.e. saccade disconjugacy, were investigated. Target-directed saccade disconjugacy was confirmed, in some cases, between targets presented at different depths on a stereoscopic display. In general, however, the direction of saccade disconjugacy was best predicted by the horizontal direction of the target: leftward saccade disconjugacy was more divergent than rightward. This asymmetry was overlaid on a disconjugacy response which, when considered in relative terms, was appropriate for the level of vergence demand. Linear perspective depth cues did not elicit target-directed disconjugate saccades.

    An investigation into gaze-based interaction techniques for people with motor impairments

    Get PDF
    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment and also provide opportunities for social interaction in multi-player environments. Games are also being used increasingly in education to motivate and engage young people, and it is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period starting in the early 1990s that has investigated interaction techniques based on gaze position intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these, overcoming the inherent inaccuracies of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and online virtual worlds, and different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase then studied the applicability of the research findings thus far to groups of people with motor impairments, and in particular, the means of adapting the interaction techniques to individual abilities.
    In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing the tasks in specific types of games, and how these can be adapted to the differing abilities of individuals with motor impairments.
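The gaze-gesture idea mentioned in the second phase, issuing a command when the gaze crosses a sequence of screen regions in order, can be sketched as follows. This is a hedged illustration only; the region layout, gesture table, and function names are assumptions, not the thesis's actual techniques.

```python
# Illustrative sketch of command input by gaze gestures: a gaze trace is
# collapsed into a sequence of coarse screen-region visits, and known
# sequences are mapped to commands. The regions and the gesture table
# are invented for this example.
GESTURES = {
    ("left", "right"): "page_forward",
    ("right", "left"): "page_back",
}

def classify_region(x, width):
    """Map a horizontal gaze coordinate to a coarse screen region."""
    if x < width / 3:
        return "left"
    if x > 2 * width / 3:
        return "right"
    return "centre"

def recognize(samples, width=1920):
    """Collapse (x, y) gaze samples into region transitions, drop the
    neutral centre region, and look up the resulting stroke sequence."""
    regions = []
    for x, _y in samples:
        r = classify_region(x, width)
        if not regions or regions[-1] != r:
            regions.append(r)
    strokes = tuple(r for r in regions if r != "centre")
    return GESTURES.get(strokes)
```

Because a gesture is a deliberate sweep across regions rather than a dwell on a target, this style of input sidesteps the pointing-accuracy problem highlighted in the first phase, at the cost of a small command vocabulary.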