
    Virtual Keyboard Interaction Using Eye Gaze and Eye Blink

    A Human-Computer Interaction (HCI) system designed for people with severe disabilities who cannot control a conventional computer mouse is presented. The camera-based system monitors a user's eyes and allows the user to simulate mouse clicks using deliberate blinks and winks. For users who can control head movements and can wink with one eye while keeping the other eye visibly open, the system allows complete use of a standard mouse, including moving the pointer, left- and right-clicking, double-clicking, and click-and-drag. For users who cannot wink but can blink voluntarily, the system allows left clicks, the most common and useful mouse action. The system requires no training data to distinguish open eyes from closed eyes; eye classification is performed online during real-time interaction. The system effectively lets users replicate a conventional computer mouse: they can open a document and type letters by blinking, as well as open files and folders on the desktop. DOI: 10.17762/ijritcc2321-8169.150710
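    As a concrete illustration of the blink-to-click step described above, the sketch below debounces per-frame eye-open/eye-closed decisions into deliberate-blink click events. It assumes a frame classifier already exists (the system above needs no training data for it); the class name and duration thresholds are illustrative assumptions, not values from the paper.

    class BlinkClicker:
        """Turns deliberate blinks into left-click events.

        A closure shorter than min_s is treated as an involuntary reflex
        blink; one longer than max_s is treated as resting with eyes shut.
        """

        def __init__(self, min_s: float = 0.3, max_s: float = 1.5):
            self.min_s = min_s
            self.max_s = max_s
            self.closed_since = None  # timestamp when the eyes last closed

        def update(self, eye_open: bool, t: float):
            """Feed one video frame's classification; return "click" or None."""
            if not eye_open:
                if self.closed_since is None:
                    self.closed_since = t  # closure just started
                return None
            if self.closed_since is None:
                return None  # eyes were already open
            duration = t - self.closed_since
            self.closed_since = None
            if self.min_s <= duration <= self.max_s:
                return "click"  # deliberate blink -> simulate a left click
            return None

    # Example: a 0.5 s closure between open frames yields exactly one click.
    clicker = BlinkClicker()
    events = [clicker.update(o, t) for o, t in
              [(True, 0.0), (False, 0.1), (False, 0.4), (True, 0.6), (True, 0.7)]]
    assert events == [None, None, None, "click", None]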

    Dwell-free input methods for people with motor impairments

    Millions of individuals affected by disorders or injuries that cause severe motor impairments have difficulty performing compound manipulations using traditional input devices. This thesis first explores how effective various assistive technologies are for people with motor impairments, studying the following questions: (1) What activities are performed? (2) What tools are used to support these activities? (3) What are the advantages and limitations of these tools? (4) How do users learn about and choose assistive technologies? (5) Why do users adopt or abandon certain tools? A qualitative study of fifteen people with motor impairments indicates that users have strong needs for efficient text entry and communication tools that are not met by existing technologies. To address these needs, this thesis proposes three dwell-free input methods designed to improve the efficacy of target selection and text entry with eye-tracking and head-tracking systems: (1) the Target Reverse Crossing selection mechanism, (2) the EyeSwipe eye-typing interface, and (3) the HGaze Typing interface. With Target Reverse Crossing, a user moves the cursor into a target and then reverses over a selection goal to select it. This mechanism is significantly more efficient than dwell-time selection. Target Reverse Crossing is then adapted in EyeSwipe to delineate the start and end of a word that is eye-typed with a gaze path connecting the intermediate characters (as with traditional gesture typing). When compared with a dwell-based virtual keyboard, EyeSwipe affords higher text entry rates and a more comfortable interaction. Finally, HGaze Typing adds head gestures to gaze-path-based text entry to enable simple and explicit command activations. Results from a user study demonstrate that HGaze Typing outperforms a dwell-time method in both performance and user satisfaction.
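    To make the Target Reverse Crossing mechanism concrete, here is a minimal Python sketch: the gaze path must enter a target region and then reverse back out across the side it entered from, so that simply passing through a target does not select it. The rectangle geometry, the side-of-entry test, and all names are illustrative assumptions rather than the thesis's implementation.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x0: float
        y0: float
        x1: float
        y1: float

        def contains(self, x: float, y: float) -> bool:
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1


    def side_of(point, target: Rect) -> str:
        """Rough side of `target` on which an outside `point` lies."""
        x, y = point
        if x < target.x0:
            return "left"
        if x > target.x1:
            return "right"
        if y < target.y0:
            return "top"
        return "bottom"


    def reverse_crossing_select(points, target: Rect) -> bool:
        """True once the path enters `target` and backs out the same side."""
        prev = None
        entry = None
        for point in points:
            inside = target.contains(*point)
            if inside and entry is None and prev is not None:
                entry = side_of(prev, target)  # edge crossed on the way in
            elif not inside and entry is not None:
                # Reversal: left through the entry side, not passed through.
                return side_of(point, target) == entry
            prev = point
        return False


    key = Rect(10, 10, 20, 20)
    assert reverse_crossing_select([(5, 15), (12, 15), (5, 15)], key)       # reversal selects
    assert not reverse_crossing_select([(5, 15), (12, 15), (25, 15)], key)  # pass-through does not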

HMOS: Head Control Mouse for Persons with Disability

    This paper presents an idea for building a human-machine interface for disabled persons. The proposed design is economical and useful for disabled persons who cannot use their hands to control computers. The main focus is controlling the mouse through head movements using a head-tilt sensor and air-blow sensors. The system uses a dual-axis accelerometer-based tilt sensor, mounted on a headset, to detect head movement; mouse clicking is activated by two air-blow sensors placed near the mouth, which register left and right clicks from the effect of air blown into them. Since the device relies only on the user's head and breath, it can be used easily, requiring little effort for either the head movement or the air blow that triggers a click. This system encourages disabled persons to start an independent professional life.
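    A rough sketch of the control loop implied by this design is given below: the dual-axis accelerometer's gravity components are converted to head-tilt angles that drive cursor velocity, and the two air-blow sensors trigger left and right clicks when blown into. The gains, thresholds, and function names are assumptions for illustration; a real build would read the actual hardware.

    import math

    GAIN = 300.0          # pixels per second per radian of head tilt
    BLOW_THRESHOLD = 0.6  # normalized pressure above which a blow counts

    def tilt_to_velocity(ax: float, ay: float) -> tuple[float, float]:
        """Map accelerometer gravity components (in g) to cursor velocity.

        Tilting the head changes how gravity projects onto the two axes;
        asin recovers the tilt angle from the clamped projection.
        """
        pitch = math.asin(max(-1.0, min(1.0, ax)))
        roll = math.asin(max(-1.0, min(1.0, ay)))
        return GAIN * roll, GAIN * pitch  # (vx, vy) in pixels/second

    def poll_clicks(left_blow: float, right_blow: float):
        """Return the mouse button implied by the two air-blow sensors."""
        if left_blow > BLOW_THRESHOLD:
            return "left"
        if right_blow > BLOW_THRESHOLD:
            return "right"
        return None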

    Intelligent Interfaces to Empower People with Disabilities

    Severe motion impairments can result from non-progressive disorders, such as cerebral palsy, or degenerative neurological diseases, such as Amyotrophic Lateral Sclerosis (ALS), Multiple Sclerosis (MS), or muscular dystrophy (MD). They can also be due to traumatic brain injuries, for example from a traffic accident, or to brainstem strokes.

    Applications of the electric potential sensor for healthcare and assistive technologies

    The work discussed in this thesis explores the possibility of employing the Electric Potential Sensor in healthcare and assistive technology applications with the same, and in some cases better, accuracy than conventional technologies. The Electric Potential Sensor is a generic and versatile sensing technology capable of working in both contact and non-contact (remote) modes. New versions of the active sensor were developed for specific surface electrophysiological signal measurements; the requirements in terms of frequency range, electrode size, and gain varied with the type of signal measured for each application. Real-time applications based on electrooculography, electroretinography, and electromyography are discussed, as well as an application based on human movement. A three-sensor electrooculography eye-tracking system was developed, which is of interest for eye-controlled assistive technologies; it achieved an accuracy at least as good as conventional wet gel electrodes for both horizontal and vertical eye movements. Surface recording of the electroretinogram, used to monitor eye health and diagnose degenerative diseases of the retina, was achieved and correlated with both corneal fibre and wet gel surface electrodes. The main signal components of electromyography lie in a higher bandwidth; surface signals of the deltoid muscle were recorded over the course of rehabilitation of a subject with an injured arm, and surface electromyography signals of the bicep were also recorded and correlated with the joint dynamics of the elbow. A related non-contact application of interest to assistive technologies was also developed: hand movement within a defined area was mapped and used to control a mouse cursor and a predictive text interface.
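    As an illustration of how a three-sensor electrooculography signal could drive a cursor, the sketch below derives a horizontal channel from the difference between two temple sensors, takes the vertical channel from the third sensor, and applies a two-point calibration from microvolts to pixels. The channel layout and calibration constants are assumptions, not details from the thesis.

    def eog_to_cursor(left_uV: float, right_uV: float, vert_uV: float,
                      cal_h=(-200.0, 200.0), cal_v=(-150.0, 150.0),
                      screen=(1920, 1080)) -> tuple[int, int]:
        """Map EOG amplitudes (microvolts) to a screen coordinate."""
        h = right_uV - left_uV  # corneo-retinal dipole: gaze right -> positive

        def scale(value, lo, hi, size):
            # Linear two-point calibration, clamped to the screen edges.
            frac = (value - lo) / (hi - lo)
            return int(max(0.0, min(1.0, frac)) * (size - 1))

        x = scale(h, cal_h[0], cal_h[1], screen[0])
        y = scale(vert_uV, cal_v[0], cal_v[1], screen[1])
        return x, y

    # Centre gaze (all channels near zero) lands mid-screen.
    assert eog_to_cursor(0.0, 0.0, 0.0) == (959, 539)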

    Multi-modal post-editing of machine translation

    As machine translation (MT) quality continues to improve, more and more translators are switching from traditional translation from scratch to post-editing (PE) of MT output, which has been shown to save time and reduce errors. Instead of mainly generating text, translators are now asked to correct errors within otherwise helpful translation proposals, where repetitive MT errors make the process tiresome, while hard-to-spot errors make PE a cognitively demanding activity. Our contribution is three-fold. First, we explore whether interaction modalities other than mouse and keyboard can support PE well by creating and testing the MMPE translation environment. MMPE allows translators to cross out or hand-write text, drag and drop words for reordering, use spoken commands or hand gestures to manipulate text, or combine any of these input modalities. Second, our interviews revealed that translators see value in automatically receiving additional translation support when high cognitive load (CL) is detected during PE. We therefore developed a sensor framework using a wide range of physiological and behavioral data to estimate perceived CL and tested it in three studies, showing that multi-modal eye, heart, and skin measures can be used to make translation environments cognition-aware. Third, we present two multi-encoder Transformer architectures for automatic post-editing (APE) and discuss how these can adapt MT output to a domain and thereby avoid correcting repetitive MT errors. Deutsche Forschungsgemeinschaft (DFG), Projekt MMP
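    As a minimal sketch of the cognition-aware idea in the second contribution, the snippet below fuses standardized eye, heart, and skin features into one vector, scores it with a logistic model, and offers extra translation support when the estimated cognitive load crosses a threshold. The feature names, weights, and threshold are invented for illustration; MMPE's studies fit such parameters empirically.

    import math

    # Hypothetical standardized features: pupil diameter, fixation duration,
    # heart-rate variability, and skin conductance level.
    WEIGHTS = {"pupil": 1.2, "fixation": 0.8, "hrv": -0.9, "scl": 1.0}
    BIAS = -0.5

    def cognitive_load(features: dict[str, float]) -> float:
        """Return P(high cognitive load) for one window of sensor data."""
        z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
        return 1.0 / (1.0 + math.exp(-z))  # logistic link

    def should_offer_support(features: dict[str, float],
                             threshold: float = 0.7) -> bool:
        """Trigger extra translation support when estimated CL is high."""
        return cognitive_load(features) >= threshold

    # Dilated pupils plus suppressed HRV push the estimate above threshold.
    assert should_offer_support({"pupil": 1.5, "fixation": 0.4,
                                 "hrv": -1.0, "scl": 0.8})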

    Is there Joy Beyond the Joystick?: Immersive Potential of Brain-Computer Interfaces

    Immersion, the state of being fully engaged in one's current operation, is a descriptor commonly used to appraise user experience in computer games and software applications. As the use of brain-computer interfaces (BCIs) begins to expand into the consumer sphere, questions arise concerning the ability of BCIs to modulate user immersion. This study employed a computer game to examine the effect of a consumer-grade BCI (the Emotiv EPOC) on immersion. In doing so, this study also explored the relationship between BCI usability and immersion levels. An experiment with twenty-seven participants showed that users were significantly more immersed when controlling the testing game with a BCI in comparison to traditional control methods. The results suggest that increased immersion levels may be caused by the challenging nature of BCI control rather than the BCI's ability to directly translate user intent.