
    SymbolDesign: A User-centered Method to Design Pen-based Interfaces and Extend the Functionality of Pointer Input Devices

    A method called "SymbolDesign" is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis. National Science Foundation (IIS-0093367, IIS-0308213, IIS-0329009, EIA-0202067).
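    The abstract describes, but does not include code for, a classifier whose architecture adapts to task complexity. As a rough illustration only, the sketch below grows a hidden layer until training accuracy meets a threshold; the stroke resampling, threshold, and growth schedule are assumptions for illustration, not SymbolDesign's actual mechanism.

```python
# Minimal sketch (not the authors' code): a stroke classifier whose
# hidden-layer size grows until the training data is fit, loosely
# mirroring a network architecture that adapts to task complexity.
import numpy as np
from sklearn.neural_network import MLPClassifier

def resample_stroke(points, n=32):
    """Resample a variable-length (x, y) stroke to n evenly spaced points."""
    points = np.asarray(points, dtype=float)
    t = np.linspace(0, 1, len(points))
    t_new = np.linspace(0, 1, n)
    x = np.interp(t_new, t, points[:, 0])
    y = np.interp(t_new, t, points[:, 1])
    return np.concatenate([x, y])  # fixed-length feature vector

def fit_adaptive_classifier(strokes, labels, max_hidden=128, target_acc=0.99):
    """Double the hidden layer until training accuracy reaches target_acc."""
    X = np.stack([resample_stroke(s) for s in strokes])
    hidden = 4
    while True:
        clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                            random_state=0).fit(X, labels)
        if clf.score(X, labels) >= target_acc or hidden >= max_hidden:
            return clf
        hidden *= 2  # task looks harder than the current capacity allows
```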

    Pervasive and standalone computing: The perceptual effects of variable multimedia quality.

    The introduction of multimedia on pervasive and mobile communication devices raises a number of perceptual quality issues; however, limited work has examined the three-way interaction between use of equipment, quality of perception, and quality of service. Our work measures levels of informational transfer (objective) and user satisfaction (subjective) when users are presented with multimedia video clips at three different frame rates, using four different display devices, simulating variation in participant mobility. Our results show that variation in frame rate does not impact a user's level of information assimilation, but does impact a user's perception of multimedia video 'quality'. Additionally, increased visual immersion can be used to increase transfer of video information, but can negatively affect the user's perception of 'quality'. Finally, we illustrate the significant effect of clip content on the transfer of video, audio, and textual information, placing into doubt the use of purely objective quality definitions when considering multimedia presentations.

    A Hybrid Gaze Pointer with Voice Control

    Accessibility in technology has been a challenge since the beginning of the 1800s. From the typewriter Pellegrino Turri built for the blind to the on-screen keyboard built by Microsoft, there have been several advancements in assistive technologies. The basic abilities anyone needs to operate a computer are to navigate the device, input information, and perceive the output. All three of these categories have undergone tremendous advancements over the years. With the internet boom in particular, pointing on a computer screen has become a necessity, which has attracted research into this area. However, these advancements still leave considerable room for improvement in accuracy and latency. This project focuses on building a low-cost application to track eye gaze, which in turn can be used to solve the navigation problem. The application is targeted at people with motor disabilities caused by medical conditions such as Carpal Tunnel Syndrome, arthritis, Parkinson's disease, tremors, fatigue, and Cerebral Palsy. It may also serve as a solution for people with amputated limbs or fingers. For others, it could address situational impairments or serve as a foundation for further research. This tool aims to help users feel independent and confident while using a computer system.
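    As a hedged illustration of what a low-cost webcam gaze pointer might look like, the sketch below maps an iris estimate to screen coordinates. It assumes MediaPipe's refined iris landmarks and a naive uncalibrated mapping; the project's actual pipeline is not described at this level in the abstract, and a usable system would add per-user calibration, smoothing, and a click mechanism (here, the voice control of the title).

```python
# Simplified sketch (not the project's implementation): move the mouse
# pointer from a webcam iris estimate. Landmark index 468 is an iris
# center when MediaPipe's refine_landmarks is enabled.
import cv2
import mediapipe as mp
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        iris = results.multi_face_landmarks[0].landmark[468]
        # Naive linear mapping from normalized image coordinates to the
        # screen; a real system calibrates this per user and smooths it.
        pyautogui.moveTo(iris.x * SCREEN_W, iris.y * SCREEN_H)
    cv2.imshow("gaze", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```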

    Real-World Eye-Tracking in Face-to-Face and Web Modes

    Eye-tracking is becoming a popular tool for understanding how different forms of asking questions influence respondents' answers. Galesic et al. (2008) have successfully shown how primacy effects can be detected with an eye-tracker that measures the time of an eye fixation at different points of a question or response scale. Until now this method has almost exclusively been used to test questions on a computer. Our article extends the application of eye-tracking to face-to-face mode with show cards (that is, paper-and-pencil interviewing, or PAPI). Unlike eye-tracking on a computer screen, tracking eye movements in different modes requires an innovative real-world eye-tracker that can follow eye movements anywhere the respondent may look. The current article reports on using a real-world eye-tracker to measure visual attention in two modes with visual materials: web and PAPI. We discuss the potential and limitations of the technique, provide successful examples of measuring and comparing visual attention in the two modes, and conclude with suggestions for avenues that can be studied using this new tool.

    Research on Image Retrieval Optimization Based on Eye Movement Experiment Data

    Satisfying a user's actual underlying needs in the image retrieval process is a difficult challenge facing image retrieval technology. The aim of this study is to improve the performance of a retrieval system and provide users with optimized search results using eye-movement feedback. We analyzed the eye movement signals of the user's image retrieval process from cognitive and mathematical perspectives. Data collected from 25 designers in eye-tracking experiments were used to train and evaluate the model. In statistical analysis, eight eye movement features differed significantly between selected and unselected groups of images (p < 0.05). An optimal selection of input features resulted in an overall accuracy of 87.16% for the support vector machine prediction model. Judging the user's requirements in the image retrieval process through eye movement behaviors was shown to be effective.
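    The modeling step lends itself to a short sketch: a support vector machine trained on per-image gaze features to predict whether an image is selected. The code below is illustrative only; the feature count matches the abstract, but the synthetic data and implied feature names (fixation count, fixation duration, and so on) are placeholders, not the study's dataset.

```python
# Illustrative sketch, in the spirit of the study: predict image
# selection from eye-movement features with an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: one viewed image; columns: 8 hypothetical gaze features such as
# fixation count, mean fixation duration, and pupil diameter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # placeholder feature matrix
y = rng.integers(0, 2, size=200)   # 1 = image selected, 0 = not

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```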

    Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking

    Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected on blinking. In our first study, we show that this method significantly improves the sensitivity to task difficulty. We then demonstrate how to form a framework where the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. This approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications. (Accepted version in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '21), May 8-13, 2021, Yokohama, Japan. ACM, New York, NY, USA. 19 pages. https://doi.org/10.1145/3411764.344557)
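    The described pipeline shape, a time-frequency representation of a blink signal fed to a recurrent network, can be sketched roughly as follows. This is an approximation under stated assumptions: a generic spectrogram stands in for the paper's representation, a plain LSTM for its multi-dimensional variant, and the signal and frame rate are placeholders.

```python
# Rough sketch of the pipeline's shape (not the paper's implementation):
# turn a 1-D eye-aperture signal into a time-frequency image, then feed
# the spectrogram frames to an LSTM that scores task difficulty.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 30.0                              # assumed webcam frame rate
aperture = np.random.rand(900)         # placeholder eye-openness signal
freqs, times, Sxx = spectrogram(aperture, fs=fs, nperseg=64, noverlap=32)

class BlinkLSTM(nn.Module):
    def __init__(self, n_freq_bins, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # difficulty-related score

    def forward(self, x):                  # x: (batch, time, freq_bins)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model = BlinkLSTM(n_freq_bins=len(freqs))
frames = torch.tensor(Sxx.T[None], dtype=torch.float32)  # (1, time, freq)
print(model(frames).shape)                 # torch.Size([1, 1])
```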

    Design and Experimental Evaluation of a Context-aware Social Gaze Control System for a Humanlike Robot

    Nowadays, social robots are increasingly being developed for a variety of human-centered scenarios in which they interact with people. For this reason, they should possess the ability to perceive and interpret human non-verbal and verbal communicative cues in a humanlike way. In addition, they should be able to autonomously identify the most important interactional target at the proper time by exploring the perceptual information, and exhibit a believable behavior accordingly. Employing a social robot with such capabilities has several positive outcomes for human society. This thesis presents a multilayer context-aware gaze control system that has been implemented as part of a humanlike social robot. Using this system, the robot is able to mimic human perception, attention, and gaze behavior in a dynamic multiparty social interaction. The system enables the robot to appropriately direct its gaze, at the right time, at environmental targets and at the humans who are interacting with each other and with the robot. To this end, the attention mechanism of the gaze control system is based on features that have been proven to guide human attention: verbal and non-verbal cues, proxemics, the effective field of view, the habituation effect, and low-level visual features. The gaze control system uses skeleton tracking, speech recognition, facial expression recognition, and salience detection to implement these features. As part of a pilot evaluation, the gaze behavior of 11 participants was collected with a professional eye-tracking device while they watched a video of two-person interactions. By analyzing the participants' average gaze behavior, we determined the importance of human-relevant features in triggering human attention. Based on this finding, the parameters of the gaze control system were tuned to imitate human behavior in selecting features of the environment. A comparison between human gaze behavior and the gaze behavior of the developed system on the same videos shows that the proposed approach is promising, as it replicated human gaze behavior 89% of the time.
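    One way to picture such an attention mechanism is as a weighted scoring of candidate gaze targets with a habituation term that decays interest in the currently gazed target. The sketch below is a hypothetical simplification with invented weights and feature names, not the thesis implementation; in the thesis, the corresponding weights are what gets tuned against the recorded human gaze data.

```python
# Hypothetical simplification of a context-aware gaze target selector:
# each candidate gets a weighted score from its perceptual cues, minus
# a habituation penalty that grows while the robot keeps looking at it.
from dataclasses import dataclass

# Invented weights; these would be tuned against human gaze recordings.
WEIGHTS = {"speaking": 3.0, "gesture": 1.5, "proximity": 1.0, "salience": 0.5}

@dataclass
class Target:
    name: str
    cues: dict                 # e.g. {"speaking": 1.0, "salience": 0.2}
    habituation: float = 0.0   # grows while the robot keeps looking

def select_gaze_target(targets, decay=0.3):
    for t in targets:
        t.score = sum(WEIGHTS[k] * v for k, v in t.cues.items()) - t.habituation
    chosen = max(targets, key=lambda t: t.score)
    for t in targets:  # habituate to the chosen target, let the others recover
        t.habituation = t.habituation + decay if t is chosen else max(0.0, t.habituation - decay)
    return chosen

people = [Target("A", {"speaking": 1.0, "proximity": 0.8}),
          Target("B", {"gesture": 1.0, "salience": 0.6})]
print(select_gaze_target(people).name)  # "A": the speaking cue dominates
```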