
    Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data

    Recently, researchers have begun using cognitive load in various settings, e.g., educational psychology, cognitive load theory, and human-computer interaction. Cognitive load characterizes a task's demand on the limited information processing capacity of the brain. The widespread adoption of eye-tracking devices has led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP-Web, an open-source platform providing state-of-the-art data processing routines for cleaning pupillary data, combined with a graphical user interface that enables the management of studies and subjects. Future developments will include support for analyzing the cleaned data as well as support for Task-Evoked Pupillary Response (TEPR) studies.
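A core step in cleaning pupillary data is removing blink artifacts and interpolating across the resulting gaps. The sketch below illustrates that idea only; it is not CEP-Web's actual pipeline, and the assumption that blinks appear as zero-diameter samples is hypothetical.

```python
import numpy as np

def clean_pupil_trace(samples, blink_value=0.0):
    """Replace blink samples (assumed here to be recorded as `blink_value`)
    with NaN, then linearly interpolate across the gaps."""
    x = np.asarray(samples, dtype=float)
    x[x == blink_value] = np.nan
    idx = np.arange(len(x))
    valid = ~np.isnan(x)
    # fill invalid samples by linear interpolation between valid neighbours
    x[~valid] = np.interp(idx[~valid], idx[valid], x[valid])
    return x
```

Real routines typically also pad a few samples around each blink (the pupil signal is distorted before and after the eyelid closes) and reject long gaps rather than interpolating them.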

    A Self-initializing Eyebrow Tracker for Binary Switch Emulation

    We designed the Eyebrow-Clicker, a camera-based human-computer interface system that implements a new form of binary switch. When the user raises his or her eyebrows, the binary switch is activated and a selection command is issued. The Eyebrow-Clicker thus replaces the "click" functionality of a mouse. The system initializes itself by detecting the user's eyes and eyebrows, tracks these features at frame rate, and recovers in the event of errors. The initialization uses the natural blinking of the human eye to select suitable templates for tracking. Once execution has begun, a user therefore never has to restart the program or even touch the computer. In our experiments with human-computer interaction software, the system correctly determined when a user raised his eyebrows 93% of the time. Office of Naval Research; National Science Foundation (IIS-0093367)
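The binary-switch logic of an eyebrow-raise detector can be sketched as a threshold with hysteresis on the tracked eyebrow-to-eye distance: fire once when the distance rises past a threshold, and re-arm only after it drops back. The threshold ratio and function below are illustrative, not the paper's method.

```python
def eyebrow_switch(distances, baseline, ratio=1.2):
    """Emit one 'click' event per eyebrow raise, where a raise is an
    eyebrow-eye distance exceeding `baseline * ratio` (ratio is a guess)."""
    events = []
    raised = False
    for d in distances:
        if not raised and d > baseline * ratio:
            events.append("click")   # rising edge: switch activates once
            raised = True
        elif raised and d <= baseline * ratio:
            raised = False           # falling edge: re-arm the switch
    return events
```

The hysteresis (the `raised` flag) is what makes this a binary switch rather than a level detector: holding the eyebrows up produces exactly one click.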

    Virtual Keyboard Interaction Using Eye Gaze and Eye Blink

    A Human-Computer Interaction (HCI) framework designed for people with severe disabilities to recreate control of a conventional computer mouse is presented. The camera-based framework monitors a user's eyes and allows the user to simulate clicking the mouse using deliberate blinks and winks. For users who can control head movements and can wink with one eye while keeping the other eye visibly open, the framework allows complete use of a regular mouse, including moving the pointer, left- and right-clicking, double-clicking, and click-and-dragging. For users who cannot wink but can blink voluntarily, the framework allows the user to perform left clicks, the most common and useful mouse action. The framework does not require any training data to distinguish open eyes from closed eyes; eye classification is performed online during real-time interaction. The framework effectively allows users to replicate a conventional computer mouse: it lets users open a document and type letters by blinking their eyes, and also open files and folders on a desktop. DOI: 10.17762/ijritcc2321-8169.150710
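The mapping from eye states to mouse actions described above can be sketched as a small classifier over per-eye open/closed states and event duration. The duration threshold, function name, and exact mapping are illustrative assumptions, not the paper's implementation.

```python
def classify_action(left_open, right_open, duration_s, min_blink_s=0.3):
    """Map an eye-closure event to a mouse action.
    Natural blinks are short, so events under `min_blink_s` are ignored
    (threshold is a hypothetical value, not from the paper)."""
    if duration_s < min_blink_s:
        return None                # natural blink: no command
    if not left_open and not right_open:
        return "left_click"        # deliberate blink of both eyes
    if not left_open:
        return "left_click"        # left wink
    if not right_open:
        return "right_click"       # right wink
    return None                    # both eyes open: nothing to do
```

Filtering on duration is what lets the same signal (a closed eye) serve both as an involuntary blink to be ignored and as a deliberate command.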

    GaVe: A webcam-based gaze vending interface using one-point calibration

    Gaze input, i.e., information input via the eyes of users, represents a promising method for contact-free interaction in human-machine systems. In this paper, we present the GazeVending interface (GaVe), which lets users control actions on a display with their eyes. The interface works with a regular webcam, available on most of today's laptops, and only requires a short one-point calibration before use. GaVe is designed in a hierarchical structure, presenting broad item clusters to users first and subsequently guiding them through another selection round, which allows the presentation of a large number of items. Cluster/item selection in GaVe is based on dwell time, i.e., the duration for which users look at a given cluster/item. A user study (N=22) was conducted to test optimal dwell-time thresholds and comfortable human-to-display distances. Users' perception of the system, as well as error rates and task completion times, were recorded. We found that all participants were able to quickly understand how to interact with the interface and showed good performance, selecting a target item within a group of 12 items in 6.76 seconds on average. We provide design guidelines for GaVe and discuss the potential of the system.
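Dwell-time selection, as used by GaVe for both clusters and items, can be sketched as a small state machine: a selection fires once the gaze has stayed on the same element for a threshold duration, and looking away resets the timer. The class and the 1.0 s default are illustrative; the paper studies what the threshold should actually be.

```python
class DwellSelector:
    """Fire a selection when gaze rests on one item for `dwell_s` seconds.
    `update` is called per gaze sample with the item under the gaze point
    (or None) and a timestamp; it returns the selected item or None."""

    def __init__(self, dwell_s=1.0):
        self.dwell_s = dwell_s
        self.current = None   # item currently being fixated
        self.start = None     # when the fixation on it began

    def update(self, item, now):
        if item != self.current:
            # gaze moved to a new item (or away): restart the dwell timer
            self.current, self.start = item, now
            return None
        if item is not None and now - self.start >= self.dwell_s:
            # dwell threshold reached: fire once, then reset
            self.current, self.start = None, None
            return item
        return None
```

In a hierarchical interface like GaVe the same selector runs twice: once over clusters, then again over the items within the chosen cluster.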

    Entering PIN codes by smooth pursuit eye movements

    Despite its potential, gaze interaction is still not a widely used interaction concept. Major drawbacks such as calibration, eye strain, and a high number of false alarms are associated with gaze-based interaction and limit its practicability for everyday human-computer interaction. In this paper, two experiments are described which use smooth pursuit eye movements on moving display buttons. The first experiment was conducted to extract an easy and fast interaction concept and, at the same time, to collect data to develop a specific but robust algorithm. In a follow-up experiment, twelve conventionally calibrated participants interacted successfully with the system. For another group of twelve people, the eye tracker was not calibrated individually but on a third person. Results show that for both groups interaction was possible without false alarms. Both groups rated the user experience of the system as positive.
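Smooth-pursuit interfaces of this kind commonly work by correlating the gaze trajectory with each moving button's trajectory; because correlation is invariant to offset and scale, this needs no per-user calibration, which matches the third-person-calibration result above. The sketch below shows that general matching idea, not the paper's specific algorithm; the threshold is a placeholder.

```python
import numpy as np

def pursuit_match(gaze_xy, target_trajs, threshold=0.7):
    """Return the name of the moving target whose trajectory best
    correlates with the gaze trajectory, or None if no target exceeds
    `threshold` (value is illustrative). Trajectories are (x, y) pairs
    sampled at the same times as the gaze."""
    gx, gy = np.asarray(gaze_xy, dtype=float).T
    best, best_r = None, threshold
    for name, traj in target_trajs.items():
        tx, ty = np.asarray(traj, dtype=float).T
        # average the Pearson correlation of the x and y components
        r = (np.corrcoef(gx, tx)[0, 1] + np.corrcoef(gy, ty)[0, 1]) / 2.0
        if r > best_r:
            best, best_r = name, r
    return best
```

Keeping the threshold high is how such systems achieve the low false-alarm rates reported here: gaze that is not following any button correlates poorly with all of them.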