
    Statistical Methods to Measure Reading Progression Using Eye-Gaze Fixation Points

    In this thesis, we investigate methods to accurately track reading progression by analyzing eye-gaze fixation points, using commercially available eye-tracking devices and without imposing unnatural movement constraints. To obtain the most accurate eye-gaze fixation data possible, the current state of the art relies on expensive, cumbersome apparatuses. Eye-gaze tracking using less expensive hardware, and without constraints imposed on the individual whose gaze is being tracked, yields less reliable, noise-corrupted data which proves difficult to interpret. Extending the accessibility of accurate reading-progression tracking beyond its current limits, and enabling its feasibility in a real-world, constraint-free environment, would enable a multitude of functionalities for educational, enterprise, and consumer technologies. We first discuss the ``Line Detection System'' (LDS), a Kalman filter and hidden Markov model based algorithm designed to infer from noisy data the line of text associated with each eye-gaze fixation point reported every few milliseconds during reading. This system is shown to yield an average line detection accuracy of 88.1%. Next, we discuss a ``Horizontal Saccade Tracking System'' (HSTS), which aims to track horizontal progression within each line, using a least-squares approach to filter out noise. Finally, we discuss a novel ``Slip-Kalman'' filter custom designed to track the progression of reading. This method improves upon the original LDS, performing at an average line detection accuracy of 97.8%, and offers more advanced horizontal tracking than the HSTS. The performance of each method is demonstrated using 25 pages' worth of data collected during reading
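    The LDS above combines a Kalman filter with a hidden Markov model. As a rough sketch of the filtering stage only — the constant-position model, the noise parameters `q` and `r`, and the line-geometry helper are illustrative assumptions, not the thesis's actual algorithm — a 1-D Kalman filter can smooth noisy fixation y-coordinates before snapping them to line indices:

```python
import numpy as np

def kalman_smooth_y(y_obs, q=1.0, r=50.0):
    """1-D constant-position Kalman filter over noisy fixation y-coordinates.

    q: process-noise variance (how fast gaze may drift between lines)
    r: measurement-noise variance (eye-tracker jitter)
    """
    x, p = float(y_obs[0]), r       # initial state estimate and variance
    smoothed = []
    for z in y_obs:
        p = p + q                   # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update: move estimate toward measurement
        p = (1.0 - k) * p
        smoothed.append(x)
    return np.array(smoothed)

def assign_lines(y_smoothed, line_height, y_top):
    """Snap each smoothed y-coordinate to the nearest text-line index."""
    idx = ((y_smoothed - y_top) / line_height).round().astype(int)
    return np.clip(idx, 0, None)
```

A full LDS would additionally run an HMM over the line indices to penalize implausible line jumps; this sketch only illustrates why smoothing improves line assignment on jittery data.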

    Approaches for Eye-Tracking While Reading

    In this thesis, we developed an algorithm to detect the correct line being read by participants. The reading-line classification algorithms are compared using eye-tracking data collected from a realistic reading experiment in front of a low-cost desktop-mounted eye-tracker. With the development of eye-tracking techniques, research has begun to aim at understanding information from the eyes. However, state-of-the-art eye-tracking applications are affected by a large amount of measurement noise; even expensive eye-trackers suffer from significant noise. In addition, the inherent characteristics of gaze movement increase the difficulty of obtaining valuable information from gaze measurements. We first discuss an improved Kalman smoother, called the slip-Kalman smoother, designed to separate eye-gaze data corresponding to the correct text lines and to reduce measurement noise. Next, two different classifiers are trained: one based on Gaussian discriminant analysis, the other on a support vector machine. As a result, our algorithm improved the performance of eye-gaze classification in the reading scenario and beat the previous method
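    Of the two classifiers mentioned, the Gaussian discriminant admits a compact sketch. The following is an illustrative two-class linear discriminant with a shared covariance in plain NumPy; the regularization constant and the overall setup are assumptions for the example, not the thesis's implementation:

```python
import numpy as np

class GaussianDiscriminant:
    """Two-class Gaussian discriminant analysis with shared covariance (LDA)."""

    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        self.mu0, self.mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class covariance, lightly regularized for stability
        cov = ((X0 - self.mu0).T @ (X0 - self.mu0) +
               (X1 - self.mu1).T @ (X1 - self.mu1)) / len(X)
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
        self.log_prior = np.log(len(X1) / len(X0))
        return self

    def predict(self, X):
        # Linear decision rule from the log-likelihood ratio of the two Gaussians
        w = self.prec @ (self.mu1 - self.mu0)
        b = -0.5 * (self.mu1 + self.mu0) @ w + self.log_prior
        return (X @ w + b > 0).astype(int)
```

With shared covariance the decision boundary is linear, which is why such a discriminant is often a strong, cheap baseline next to an SVM on low-dimensional gaze features.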

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space

    Robust ego-localization using monocular visual odometry


    Humanoid Robots

    For many years, human beings have tried, in all ways, to recreate the complex mechanisms that form the human body. Such a task is extremely complicated, and the results are not totally satisfactory. However, with increasing technological advances based on theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research not only intends to create humanoid robots, a great part of them constituting autonomous systems, but also, in some way, to offer deeper knowledge of the systems that form the human body, with possible applications in rehabilitation technology for human beings, gathering studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of research efforts inspired by this ideal, carried out by various researchers worldwide, who analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision, and locomotion

    MECHANISMS OF GAZE STABILITY DURING WALKING: BEHAVIORAL AND PHYSIOLOGICAL MEASURES RELATING GAZE STABILITY TO OSCILLOPSIA

    Visual sensory input plays a significant role in maintaining upright posture during walking. Visual input contributes to control of head, trunk, and leg motion during walking to facilitate interaction with and avoidance of objects and individuals in the environment. The vestibular system contributes to postural control during walking and also to stabilization of the eyes during head motion which may allow for more accurate use of visual information. This dissertation reports the findings of five experiments which explore how the nervous system uses vision to control upright posture during walking and also whether the act of walking contributes to gaze stability for individuals with severe vestibular loss. In the first experiment, continuous oscillatory visual scene motion was used to probe how the use of visual input changes from standing to walking and also to determine whether the trunk motion response to visual motion was the same in the medio-lateral (ML) and anterior-posterior (AP) directions. In the second experiment, visual feedback (VFB) regarding the approximate center of mass position in the ML and AP directions was used to demonstrate that ML path stability was enhanced by concurrent visual feedback for young and older adults. In the third experiment, adults with vestibular loss and healthy adults were both able to use VFB during treadmill walking to enhance ML path stability and also to separately modify their trunk orientation to vertical. The final two experiments investigated whether gaze stability was enhanced during treadmill walking compared to passive replication of sagittal plane walking head motion (seated walking) for individuals with severe vestibular loss. Individuals with severe bilateral vestibular hypofunction displayed appropriately timed eye movements which compensated for head motion during active walking compared to seated walking. 
Timing information from the task of active walking may have contributed to enhancement of gaze stability that was better than predictions from passive head motion. This dissertation demonstrates: 1) the importance of visual sensory input for postural control during walking; 2) that visual information can be leveraged to modify trunk and whole body walking behavior; and 3) that the nervous system may leverage intrinsic timing information during active walking to enhance gaze stability in the presence of severe vestibular disease

    Camera-based estimation of student's attention in class

    Two essential elements of classroom lecturing are the teacher and the students. This human core can easily be lost in the overwhelming list of technological supplements aimed at improving the teaching/learning experience. We start from the question of whether we can formulate a technological intervention around the human connection, and find indicators which would tell us when the teacher is not reaching the audience. Our approach is based on principles of unobtrusive measurements and social signal processing. Our assumption is that students with different levels of attention will display different non-verbal behaviour during the lecture. Inspired by information theory, we formulated a theoretical background for our assumptions around the idea of synchronization between the sender and receiver, and between several receivers focused on the same sender. Based on this foundation we present a novel set of behaviour metrics as the main contribution. By using a camera-based system to observe lectures, we recorded an extensive dataset in order to verify our assumptions. In our first study on motion, we found that differences in attention are manifested on the level of audience movement synchronization. We formulated the measure of ``motion lag'' based on the idea that attentive students would have a common behaviour pattern. For our second set of metrics we explored ways to substitute intrusive eye-tracking equipment in order to record gaze information of the entire audience. To achieve this we conducted an experiment on the relationship between head orientation and gaze direction. Based on acquired results we formulated an improved model of gaze uncertainty than the ones currently used in similar studies. In combination with improvements on head detection and pose estimation, we extracted measures of audience head and gaze behaviour from our remote recording system. 
From the collected data we found that synchronization between students' head orientation and the teacher's motion serves as a reliable indicator of student attentiveness. To illustrate the predictive power of our features, a supervised-learning model was trained, achieving satisfactory results at predicting students' attention
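    The ``motion lag'' idea above — how far the audience's movement trails the teacher's — can be illustrated with a simple normalized cross-correlation scanned over non-negative lags. This is a hypothetical sketch under assumed inputs (two aligned per-frame motion-magnitude series), not the paper's actual metric:

```python
import numpy as np

def motion_lag(reference, signal, max_lag=30):
    """Return (lag, correlation) at which `signal` best matches `reference`.

    Only non-negative lags are scanned: `signal` (e.g. student motion) is
    assumed to follow `reference` (e.g. teacher motion), never lead it.
    """
    ref = (reference - reference.mean()) / (reference.std() + 1e-12)
    sig = (signal - signal.mean()) / (signal.std() + 1e-12)
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        n = len(ref) - lag
        c = float(np.dot(ref[:n], sig[lag:lag + n]) / n)  # normalized overlap
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag, best_corr
```

Under the synchronization hypothesis, attentive viewers would show a small, consistent lag with high correlation, while inattentive viewers would show weak correlation at every lag.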