
    Micro-video display with ocular tracking and interactive voice control

    In certain space-restricted environments, many of the benefits of computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility of existing computer interface devices. Accordingly, an effort to develop a highly miniaturized, 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movements and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331, sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
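
    A minimal sketch of the 'conjunctive' selection rule described above: a menu command fires only when the operator's gaze has dwelled on an item and a matching voice command arrives. The class, timings, and confirmation logic are assumptions for illustration; the SIDI prototype's actual implementation is not published here.

```python
# Hypothetical sketch of conjunctive gaze + voice selection (not SIDI code).

CONFIRM_DWELL_S = 0.3   # assumed minimum fixation before a voice command counts

class ConjunctiveSelector:
    def __init__(self):
        self.fixated_item = None    # menu item currently under the gaze
        self.fixation_start = None

    def on_gaze(self, item, now):
        """Track which menu item the eyes are fixating; reset timer on change."""
        if item != self.fixated_item:
            self.fixated_item = item
            self.fixation_start = now

    def on_voice(self, spoken_word, now):
        """Execute a command only if the spoken word names the fixated item."""
        if self.fixated_item is None:
            return None
        dwelled = (now - self.fixation_start) >= CONFIRM_DWELL_S
        if dwelled and spoken_word == self.fixated_item:
            return self.fixated_item   # conjunctive confirmation: gaze AND voice agree
        return None

selector = ConjunctiveSelector()
selector.on_gaze("OPEN_LOG", now=0.0)
assert selector.on_voice("OPEN_LOG", now=0.5) == "OPEN_LOG"
```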

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
    Grants: F31 DC014872 - NIDCD NIH HHS; R01 DC002852 - NIDCD NIH HHS; R01 DC007683 - NIDCD NIH HHS; T90 DA032484 - NIDA NIH HHS
    https://www.ncbi.nlm.nih.gov/pubmed/?term=Surface+electromyographic+control+of+a+novel+phonemic+interface+for+speech+synthesis
    Published version
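
    As an illustration of the sEMG input modality, the sketch below detects a facial-muscle activation by thresholding a rectified, smoothed EMG envelope; each supra-threshold onset could act as a selection of the currently highlighted phoneme. The sampling rate, window length, and threshold rule are assumptions, not the authors' pipeline.

```python
import numpy as np

FS = 1000    # assumed sampling rate, Hz
WIN = 100    # 100 ms moving-average window

def emg_envelope(signal):
    """Full-wave rectify the signal, then smooth with a moving average."""
    rectified = np.abs(signal - np.mean(signal))
    kernel = np.ones(WIN) / WIN
    return np.convolve(rectified, kernel, mode="same")

def detect_onsets(envelope, baseline, k=3.0):
    """Return sample indices where the envelope first crosses
    baseline mean + k * baseline std (a common heuristic)."""
    thresh = baseline.mean() + k * baseline.std()
    above = envelope > thresh
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

rng = np.random.default_rng(0)
rest = rng.normal(0, 1, 2 * FS)                    # 2 s of rest-level noise
burst = np.concatenate([rest, rng.normal(0, 8, FS // 2), rest])
env = emg_envelope(burst)
print(detect_onsets(env, emg_envelope(rest)))      # onset near sample 2000
```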

    A practical EMG-based human-computer interface for users with motor disabilities

    In line with the mission of the Assistive Technology Act of 1998 (ATA), this study proposes an integrated assistive real-time system which affirms that technology is a valuable tool that can be used to improve the lives of people with disabilities. An assistive technology device is defined by the ATA as any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities. The purpose of this study is to design and develop an alternate input device that can be used even by individuals with severe motor disabilities. This real-time system design utilizes electromyographic (EMG) biosignals from cranial muscles and electroencephalographic (EEG) biosignals from the cerebrum's occipital lobe, which are transformed into controls for two-dimensional (2-D) cursor movement, the left-click (Enter) command, and an ON/OFF switch for the cursor-control functions. This HCI system classifies biosignals into mouse functions by applying amplitude thresholds and performing power spectral density (PSD) estimations on discrete windows of data. Spectral power summations are aggregated over several frequency bands between 8 and 500 Hz and then compared to produce the correct classification. The result is an affordable DSP-based system that, when combined with an on-screen keyboard, enables the user to fully operate a computer without using any extremities.
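
    The classification step described above (amplitude gating plus band-power comparison) can be sketched as follows. The specific band edges and the mapping from dominant band to mouse function are invented for illustration; the study states only that spectral power is summed over several bands between 8 and 500 Hz.

```python
import numpy as np
from scipy.signal import welch

FS = 1000                                   # assumed sampling rate, Hz
BANDS = {"low": (8, 60), "mid": (60, 200), "high": (200, 500)}   # assumed edges

def band_powers(window, fs=FS):
    """Welch PSD of one window of data, summed within each band."""
    freqs, psd = welch(window, fs=fs, nperseg=256)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def classify(window, amp_thresh=50.0):
    """Amplitude gate first, then pick the dominant band.
    The band -> mouse-function map below is hypothetical."""
    if np.max(np.abs(window)) < amp_thresh:
        return "idle"
    powers = band_powers(window)
    dominant = max(powers, key=powers.get)
    return {"low": "cursor_move", "mid": "left_click", "high": "toggle"}[dominant]
```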

    Fuzzy Mouse Cursor Control System for Computer Users with Spinal Cord Injuries

    People with severe motor impairments due to Spinal Cord Injury (SCI) or Spinal Cord Dysfunction (SCD) often experience difficulty with accurate and efficient control of pointing devices (Keates et al., 2002). This frequently limits their integration into society as well as their unassisted control over the environment. The questions “How can someone with severe motor impairments perform mouse pointer control as accurately and efficiently as an able-bodied person?” and “How can these interactions be advanced through the use of Computational Intelligence (CI)?” are the driving forces behind the research described in this paper. Through this research, a novel fuzzy mouse cursor control system (FMCCS) is developed. The goal of this system is to simplify cursor control and improve its efficiency by applying fuzzy logic to its decision-making, so that disabled Internet users can operate a networked computer conveniently and easily. The FMCCS core consists of several fuzzy control functions, which define different user interactions with the system. The novel cursor control system is based on motor functions that remain available to most people with complete paraplegia: limited vision and breathing control. One of the biggest obstacles in developing human-computer interfaces that rely primarily on eyesight and breath control is the user’s limited strength, stamina, and reaction time. Within the FMCCS developed in this research, these limitations are minimized through the use of a novel pneumatic input device and intelligent control algorithms for soft data analysis, fuzzy logic, and user feedback assistance during operation. The new system is built from a reliable, inexpensive sensory system and readily available computing techniques. Initial experiments with healthy and SCI subjects clearly demonstrated the benefits and promising performance of the new system: the FMCCS is accessible to people with severe SCI; it is adaptable to user-specific capabilities and wishes; it is easy to learn and operate; and point-to-point movement is responsive, precise, and fast. The sophisticated interaction features, good movement control without strain or clinical risk, and the fact that quadriplegics whose breathing is assisted by a respirator still possess enough control to use the new system with ease provide a promising framework for future FMCCS applications. The strongest motivation for further FMCCS development, however, is the positive feedback from the persons who tested the first system prototype.
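
    A hedged illustration of the fuzzy-logic core: mapping a pneumatic (sip/puff) pressure reading to cursor speed through triangular membership functions and weighted-average defuzzification. The membership shapes, rule table, and speed values are invented for the example and are not taken from the FMCCS paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function (a < b < c), peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def cursor_speed(pressure):
    """pressure in [0, 1] -> speed in pixels/frame via three fuzzy rules:
    soft -> slow, medium -> moderate, hard -> fast (all hypothetical)."""
    soft = max(0.0, min(1.0, (0.5 - pressure) / 0.5))   # left shoulder
    medium = tri(pressure, 0.2, 0.5, 0.8)
    hard = max(0.0, min(1.0, (pressure - 0.5) / 0.5))   # right shoulder
    speeds = np.array([2.0, 8.0, 20.0])                 # slow / moderate / fast
    weights = np.array([soft, medium, hard])
    return float((weights * speeds).sum() / (weights.sum() + 1e-9))

for p in (0.1, 0.5, 0.9):
    print(p, round(cursor_speed(p), 1))   # soft sip crawls, hard puff sprints
```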

    Development of an Eye-Gaze Input System With High Speed and Accuracy through Target Prediction Based on Homing Eye Movements

    In this study, we propose a method that predicts a target from the trajectory of eye movements, increasing pointing speed while maintaining high predictive accuracy. First, a predictive method based on ballistic (fast) eye movements (Approach 1) was evaluated in terms of pointing speed and predictive accuracy. With Approach 1, the so-called Midas touch problem (pointing to an unintended target) occurred, particularly when a small number of samples was used to predict a target. Therefore, to overcome the poor predictive accuracy of Approach 1, we developed a new predictive method (Approach 2) that uses homing (slow) eye movements rather than ballistic (fast) eye movements. Approach 2 overcame the inaccurate prediction of Approach 1, shortening the pointing time while maintaining high predictive accuracy.
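
    The two-stage idea can be sketched as follows, with the velocity threshold assumed for illustration: gaze samples are split into ballistic (fast) and homing (slow) phases, and the target is predicted from the homing samples, which cluster near the intended item.

```python
import numpy as np

VEL_THRESH = 100.0   # assumed px/s boundary between ballistic and homing

def predict_target(gaze_xy, timestamps, targets):
    """gaze_xy: (N,2) gaze positions; targets: (M,2) item centers.
    Returns the index of the predicted target, or None if the gaze
    is still in its ballistic phase."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    dt = np.diff(np.asarray(timestamps, dtype=float))
    vel = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    homing = gaze_xy[1:][vel < VEL_THRESH]      # keep slow samples only
    if len(homing) == 0:
        return None
    center = homing.mean(axis=0)                # where the slow samples cluster
    dists = np.linalg.norm(np.asarray(targets) - center, axis=1)
    return int(np.argmin(dists))
```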

    Integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities

    This research pursued the conceptualization, implementation, and testing of a system that allows for computer cursor control without requiring hand movement. The target user group for this system is individuals who are unable to use their hands because of spinal dysfunction or other afflictions. The system inputs consisted of electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an eye-gaze tracking (EGT) system. Each input was processed by an algorithm that produced its own cursor update information. These algorithm outputs were fused to produce effective and efficient cursor control. Experiments were conducted to compare the performance of EMG/EGT, EGT-only, and mouse cursor controls. The experiments revealed that, although EMG/EGT control was slower than EGT-only and mouse control, it effectively controlled the cursor without a spatial accuracy limitation and also facilitated a reliable click operation.
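
    One plausible form of the EMG/EGT fusion (details assumed; the paper's actual fusion algorithm is not reproduced here): the eye tracker provides fast, coarse cursor placement, while EMG supplies fine adjustments that are free of the tracker's spatial-accuracy limit, plus the click.

```python
import numpy as np

SACCADE_JUMP_PX = 80.0   # assumed gaze jump big enough to re-seat the cursor

def fuse(cursor, gaze, emg_delta):
    """Return the next cursor position from both input channels."""
    cursor = np.asarray(cursor, dtype=float)
    gaze = np.asarray(gaze, dtype=float)
    if np.linalg.norm(gaze - cursor) > SACCADE_JUMP_PX:
        return gaze                                   # coarse: warp to the new gaze region
    return cursor + np.asarray(emg_delta, dtype=float)  # fine: EMG nudges the cursor

print(fuse([100, 100], [400, 300], [0, 0]))   # -> snaps to gaze
print(fuse([100, 100], [110, 105], [2, -1]))  # -> EMG fine adjustment
```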

    Human Computer Interactions for Amyotrophic Lateral Sclerosis Patients


    Operating Different Displays in Military Fast Jets Using Eye Gaze Tracker

    This paper investigated the use of an eye-gaze-controlled interface in a military aviation environment. We set up a flight simulator and used the gaze-controlled interface with three different display configurations (head-down, head-up, and head-mounted) for military fast jets. Our studies found that the gaze-controlled interface produced a statistically significant increase in the speed of interaction for secondary mission control tasks compared to touchscreen- and joystick-based target designation systems. Finally, we tested a gaze-controlled system inside an aircraft, both on the ground and in different phases of flight, with military pilots. Results showed that they could undertake representative pointing and selection tasks in less than two seconds, on average.
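
    A minimal dwell-select sketch of the kind of gaze pointing such a study involves; the dwell threshold and logic are assumptions, as the abstract does not describe the selection algorithm.

```python
DWELL_S = 0.8   # assumed dwell threshold, seconds

class DwellSelector:
    def __init__(self):
        self.item, self.since = None, None

    def update(self, item_under_gaze, now):
        """Feed one gaze sample; returns an item once its dwell completes."""
        if item_under_gaze != self.item:
            self.item, self.since = item_under_gaze, now   # new fixation target
            return None
        if self.item is not None and now - self.since >= DWELL_S:
            self.since = float("inf")    # fire only once per fixation
            return self.item
        return None

sel = DwellSelector()
sel.update("WAYPOINT_3", now=0.0)
assert sel.update("WAYPOINT_3", now=0.9) == "WAYPOINT_3"
```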

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA, to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.
    Comment: 43 pages, 13 figures
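
    The paper's web/AR interface is not reproduced here, but a sketch of the kind of low-level ROS relay such an interface could sit on top of is shown below: a clamped velocity command published to the PR2 base controller. The node name, speed caps, and input source are assumptions; /base_controller/command with geometry_msgs/Twist is the PR2's standard base-velocity topic.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

MAX_LIN, MAX_ANG = 0.2, 0.4   # assumed conservative speed caps for a remote operator

def relay(linear_x, angular_z, pub):
    """Clamp and forward one velocity command to the robot base."""
    cmd = Twist()
    cmd.linear.x = max(-MAX_LIN, min(MAX_LIN, linear_x))
    cmd.angular.z = max(-MAX_ANG, min(MAX_ANG, angular_z))
    pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("web_teleop_relay")   # hypothetical node name
    pub = rospy.Publisher("/base_controller/command", Twist, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        relay(0.1, 0.0, pub)   # placeholder for input from the web interface
        rate.sleep()
```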

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution to exploit a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments to advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths to the design of effective control strategies were considered in this project. The first is the design of hybrid systems, based on the combination of the BMI with gaze control, which is a long-lasting motor function in many paralyzed patients. This approach increases the degrees of freedom available for control. The second approach is the inclusion of adaptive techniques in the BMI design. This makes it possible to transform robotic tools and devices into active assistants that co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control. These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.
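
    A toy sketch of the reinforcement-learning idea in point (2): a detected error-related brain response acts as a negative reward that adapts the controller's action preferences. The bandit formulation and all parameters are illustrative, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ACTIONS = 4
q = np.zeros(N_ACTIONS)    # learned preference for each assistive action
ALPHA, EPS = 0.1, 0.1      # learning rate, exploration rate

def choose_action():
    """Epsilon-greedy choice over the current preferences."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q))

def update(action, errp_detected):
    """Passive BMI feedback: a detected error response -> reward -1, else +1."""
    reward = -1.0 if errp_detected else 1.0
    q[action] += ALPHA * (reward - q[action])

for _ in range(200):
    a = choose_action()
    # Stand-in for the user's brain response: action 2 is "correct" here.
    update(a, errp_detected=(a != 2))
print(np.argmax(q))   # preferences converge toward the intended action (2)
```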