7 research outputs found

    Toward mobile eye-based human-computer interaction

    No full text
    Current research on eye-based interfaces mostly focuses on stationary settings. However, advances in mobile eye-tracking equipment and automated eye-movement analysis now allow for investigating eye movements during natural behavior and promise to bring eye-based interaction into people's everyday lives. Recent developments in mobile eye-tracking equipment point the way toward unobtrusive human-computer interfaces that will become pervasively usable in everyday life. The potential to track and analyze eye movements anywhere and at any time calls for new research to develop and understand eye-based interaction in mobile daily-life settings.

    Multimodal recognition of reading activity in transit using body-worn sensors

    No full text
    Reading is one of the most well-studied visual activities. Vision research traditionally focuses on understanding the perceptual and cognitive processes involved in reading. In this work we recognize reading activity by jointly analyzing eye and head movements of people in an everyday environment. Eye movements are recorded using an electrooculography (EOG) system; body movements using body-worn inertial measurement units. We compare two approaches for continuous recognition of reading: string matching (STR), which explicitly models the characteristic horizontal saccades during reading, and a support vector machine (SVM), which relies on 90 features extracted from the eye movement data. We evaluate both methods in a study performed with eight participants reading while sitting at a desk, standing, walking indoors and outdoors, and riding a tram. We introduce a method to segment reading activity by exploiting the sensorimotor coordination of eye and head movements during reading. Using person-independent training, we obtain an average precision for recognizing reading of 88.9% (recall 72.3%) using STR and of 87.7% (recall 87.9%) using SVM over all participants. We show that the proposed segmentation scheme improves the performance of recognizing reading events by more than 24%. Our work demonstrates that the joint analysis of eye and body movements is beneficial for reading recognition and opens up discussion on the wider applicability of a multimodal recognition approach to other visual and physical activities.
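    The string-matching (STR) idea the abstract describes can be sketched as follows: horizontal saccades are encoded as symbols, and a reading pattern of repeated small rightward saccades followed by a large leftward return sweep is matched against the symbol stream. This is a minimal illustration, not the paper's implementation; the symbol names, amplitude thresholds, and run length are assumptions.

    ```python
    def encode_saccades(dx_list, small=1.0, large=3.0):
        """Map horizontal saccade amplitudes (in degrees, illustrative units)
        to symbols: 'r' = small rightward saccade (word-to-word),
        'L' = large leftward return sweep (new line), '.' = anything else."""
        symbols = []
        for dx in dx_list:
            if small <= dx < large:
                symbols.append('r')
            elif dx <= -large:
                symbols.append('L')
            else:
                symbols.append('.')
        return ''.join(symbols)

    def looks_like_reading(symbols, min_forward=3):
        """Detect the reading pattern: at least min_forward small rightward
        saccades in a row, terminated by a return sweep."""
        run = 0
        for s in symbols:
            if s == 'r':
                run += 1
            elif s == 'L' and run >= min_forward:
                return True
            else:
                run = 0
        return False

    # Four word-by-word saccades followed by a return sweep -> reading
    print(looks_like_reading(encode_saccades([1.5, 1.2, 1.8, 1.4, -8.0])))  # True
    ```

    A classifier like this can run continuously over a saccade stream, which is why the paper pairs it with a segmentation scheme based on coordinated eye and head movements.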

    Active Capacitive Sensing: Exploring a New Wearable Sensing Modality for Activity Recognition

    No full text
    The paper describes the concept, implementation, and evaluation of a new on-body capacitive sensing approach to derive activity related information. Using conductive textile based electrodes that are easy to integrate in garments, we measure changes in capacitance inside the human body. Such changes are related to motions and shape changes of muscle, skin, and other tissue, which can in turn be related to a broad range of activities and physiological parameters. We describe the physical principle, the analog hardware needed to acquire and pre-process the signal, and example signals from different body locations and actions. We perform quantitative evaluations of the recognition accuracy, focused on the specific example of collar-integrated electrodes and actions, such as chewing, swallowing, speaking, sighing (taking a deep breath), as well as different head motions and positions.
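    The recognition pipeline the abstract implies can be illustrated with a simple sliding-window feature extractor over the raw capacitance signal; features like these could feed any classifier distinguishing actions such as chewing or swallowing. This is a hedged sketch, not the paper's pipeline; the window length, step size, and feature set are assumptions.

    ```python
    import statistics

    def window_features(signal, window=50, step=25):
        """Compute (mean, population stdev, peak-to-peak amplitude) for each
        overlapping window of the capacitance signal. Window and step sizes
        are illustrative, not taken from the paper."""
        feats = []
        for start in range(0, len(signal) - window + 1, step):
            w = signal[start:start + window]
            feats.append((statistics.mean(w),
                          statistics.pstdev(w),
                          max(w) - min(w)))
        return feats
    ```

    In practice the analog front end described in the paper would supply the digitized signal; here any list of numbers stands in for it.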

    In the Eyes of Young Children: A Study on Focused Attention to Digital Educational Games

    No full text