
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    A Robotic Neuro-Musculoskeletal Simulator for Spine Research

    An influential conceptual framework advanced by Panjabi represents the living spine as a complex neuromusculoskeletal system whose biomechanical functioning is finely dependent upon the interactions among three principal subsystems: the passive musculoskeletal subsystem (the osteoligamentous spine plus the passive mechanical contributions of the muscles), the active musculoskeletal subsystem (muscles and tendons), and the neural and feedback subsystem (neural control centers and feedback elements such as mechanoreceptors located in the soft tissues) [1]. The interplay between subsystems readily encourages thought experiments about how pathologic changes in one subsystem might influence another--for example, how painful arthritic changes in the facet joints might affect the neuromuscular control of spinal movement. Answering clinical questions about the interplay between these subsystems requires the proper experimental tools and techniques. Traditional spine biomechanical experiments can provide comprehensive characterization of the structural properties of the osteoligamentous spine. However, they do not incorporate simulated neural feedback from elements such as mechanoreceptors and nociceptors into the control loop. Doing so would enable the study of how this feedback, including pain-related feedback, alters spinal loading and motion patterns. The first such development of this technology was successfully completed in this study and constitutes a Neuro-Musculoskeletal Simulator. A Neuro-Musculoskeletal Simulator has the potential to reduce the gap between bench and bedside by creating a new paradigm for estimating the outcome of spine pathologies or surgeries. The traditional paradigm is unable to estimate pain, and is also unable to determine how a treatment, combined with the patient's natural pain avoidance, would transfer load to other structures and potentially increase the risk of other problems. The novel Neuro-Musculo

    Data Processing and Investigations for the GRACE Follow-On Laser Ranging Interferometer

    This thesis presents the first in-depth results of the Laser Ranging Interferometer (LRI) onboard the Gravity Recovery And Climate Experiment - Follow On (GRACE Follow-On) mission. The LRI is a novel instrument, developed in a U.S.-German collaboration including the Albert-Einstein Institute (AEI) in Hanover. It successfully demonstrated the feasibility of ranging measurements by means of laser interferometry between two distant spacecraft and will push space-borne gravimetry missions to the next sensitivity level. The author of this thesis contributed to this project by programming a comprehensive framework for the ground processing of LRI telemetry and by analyzing various kinds of instrument data streams. The title of this thesis therefore covers both topics: data processing and investigations of the data. Within this thesis, an introduction to laser interferometry is given and the various payloads of the GRACE Follow-On satellites are presented. Furthermore, the design of the LRI itself is discussed, in order to make the relevant causal relations understandable when getting into the details of the investigations. The various kinds of telemetry data and their processing levels are presented, giving insight into the variety of data sets that are downlinked from the satellites. The investigations cover several major topics. They range from different models for assessing the absolute laser frequency, which sets the scale for converting the raw phase measurements into the corresponding inter-satellite displacements, to a detailed investigation of the carrier-to-noise ratio, which provides information about the signal quality. Furthermore, the laser beam's properties in the far field are investigated by means of the intensity and the phasefront. These investigations led to a proposal for a new scan pattern, which has since been performed. Last but not least, a comprehensive assessment of the LRI spectrum was performed, which reveals correlations between the satellites' attitude and orbit control system (AOCS), i.e. the star cameras for attitude determination and the thruster activations for attitude control, and the ranging signal measured by the LRI. In summary, this thesis is concerned with several aspects of the LRI characterization and data analysis. Since the overall data quality and sensitivity of the LRI exceed the needs and expectations of the current gravimetric mission, many of the discussed effects are of mainly academic interest, e.g. for deepening the LRI team's understanding of the instrument and for the development of future missions in the field of geodesy or space-based gravitational wave detection (the LISA mission).
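    The scale factor mentioned in this abstract, by which the absolute laser frequency converts raw phase into inter-satellite displacement, can be illustrated with a minimal sketch. This assumes a round-trip measurement, so one phase cycle corresponds to half an optical wavelength of one-way range change; the nominal frequency and function names below are illustrative assumptions, not the thesis's processing chain.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def phase_to_range(phase_cycles: np.ndarray, laser_freq_hz: float) -> np.ndarray:
    """Convert raw LRI phase (in cycles) to inter-satellite displacement.

    Assumes a round-trip interferometric measurement, so one phase cycle
    corresponds to half an optical wavelength of one-way range change.
    The absolute laser frequency sets this scale factor, which is why its
    knowledge matters for the final ranging product.
    """
    scale_m_per_cycle = C / (2.0 * laser_freq_hz)  # ~0.53 micrometres per cycle near 1064 nm
    return phase_cycles * scale_m_per_cycle

# Example: a one-million-cycle phase excursion at an assumed nominal frequency
nominal_freq_hz = 281.6e12
print(phase_to_range(np.array([1.0e6]), nominal_freq_hz))  # roughly 0.53 m
```

    An error in the assumed laser frequency therefore maps directly into a proportional scale error in the recovered displacement, which is why the abstract highlights the different models for assessing it.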

    EEG-assisted retrospective motion correction for fMRI: E-REMCOR

    We propose a method for retrospective motion correction of fMRI data in simultaneous EEG-fMRI that employs the EEG array as a sensitive motion detector. EEG motion artifacts are used to generate motion regressors describing rotational head movements with millisecond temporal resolution. These regressors are utilized for slice-specific motion correction of unprocessed fMRI data. The performance of the method is demonstrated by correcting fMRI data from five patients with major depressive disorder, who exhibited head movements of 1-3 mm during a resting EEG-fMRI run. The fMRI datasets, corrected using eight to ten EEG-based motion regressors, show significant improvements in the temporal SNR (TSNR) of the fMRI time series, particularly in the frontal brain regions and near the surface of the brain. The TSNR improvements are as high as 50% for large brain areas in single-subject analysis and as high as 25% when the results are averaged across subjects. Simultaneous application of the EEG-based motion correction and physiological noise correction by means of RETROICOR leads to average TSNR enhancements as high as 35% for large brain regions. These TSNR improvements are largely preserved after the subsequent fMRI volume registration and regression of fMRI motion parameters. The proposed EEG-assisted method of retrospective fMRI motion correction (referred to as E-REMCOR) can be used to improve the quality of fMRI data with severe motion artifacts and to reduce spurious correlations between the EEG and fMRI data caused by head movements. It does not require any specialized equipment beyond the standard EEG-fMRI instrumentation and can be applied retrospectively to any existing EEG-fMRI data set. Comment: 19 pages, 10 figures, to appear in NeuroImage.
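    The core operation described here, regressing voxel time series on EEG-derived motion regressors and comparing temporal SNR before and after, can be sketched with ordinary least squares. This is a minimal illustration on synthetic data; the variable names and the simple TSNR definition are assumptions, not the authors' implementation.

```python
import numpy as np

def regress_out(ts: np.ndarray, regressors: np.ndarray) -> np.ndarray:
    """Remove the part of each voxel time series explained by motion regressors.

    ts:          (n_timepoints, n_voxels) fMRI time series, e.g. for one slice
    regressors:  (n_timepoints, n_regressors) EEG-derived motion regressors
    """
    X = np.column_stack([np.ones(len(ts)), regressors])  # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)         # least-squares fit per voxel
    return ts - X[:, 1:] @ beta[1:]                       # subtract only the motion-related part

def tsnr(ts: np.ndarray) -> np.ndarray:
    """Temporal SNR: temporal mean divided by temporal standard deviation."""
    return ts.mean(axis=0) / ts.std(axis=0)

# Synthetic example: 200 volumes, 10 voxels, 8 motion regressors
rng = np.random.default_rng(0)
motion = rng.standard_normal((200, 8))
clean = 100 + rng.standard_normal((200, 10))
corrupted = clean + motion @ rng.standard_normal((8, 10))
print(tsnr(corrupted).mean(), tsnr(regress_out(corrupted, motion)).mean())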

    Embedded Eye-Gaze Tracking On Mobile Devices

    The eyes are one of the most expressive non-verbal tools a person has, and they can communicate a great deal to the outside world about that person's intentions. Being able to decipher these communications through robust and non-intrusive gaze tracking techniques is increasingly important as we look toward improving Human-Computer Interaction (HCI). Traditionally, devices which are able to determine a user's gaze are large, expensive and often restrictive. This work investigates the prospect of using common mobile devices such as tablets and phones as an alternative means of obtaining a user's gaze. Mobile devices now often contain high-resolution cameras, and their ever-increasing computational power allows increasingly complex algorithms to be performed in real time. A mobile solution allows us to turn the device into a dedicated portable gaze-tracking device for use in a wide variety of situations. This work specifically looks at where the challenges lie in transitioning current state-of-the-art gaze methodologies to mobile devices and suggests novel solutions to counteract the specific challenges of the medium. In particular, when the mobile device is held in the hands, fast changes in position and orientation relative to the user can occur. In addition, since these devices lack the technologies commonly relied upon for gaze estimation, such as infra-red lighting, novel alternatives are required that work under common everyday conditions. A person's gaze can be determined from both their head pose and the orientation of the eye relative to the head. To meet the challenges outlined, a geometric approach is taken in which a new model is introduced for each; by design, the two models are completely synchronised through a common origin. First, a novel 3D head-pose estimation model called the 2.5D Constrained Local Model (2.5D CLM) is introduced that directly and reliably obtains the head pose from a monocular camera. Then, a new model for gaze estimation is introduced -- the Constrained Geometric Binocular Model (CGBM), in which the visual ray representing the gaze from each eye is jointly optimised to intersect a known monitor plane in 3D space. The potential of both is that the burden of calibration is placed on the camera and monitor setup, which on mobile devices is fixed and can be determined during factory construction. In turn, the user requires either no calibration or, optionally, a one-time estimation of the visual offset angle. This work details the new models and specifically investigates their applicability and suitability in terms of their potential to be used on mobile platforms.
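    The geometric core of the CGBM as described above, a visual ray from each eye intersected with a known monitor plane, reduces to a standard ray-plane intersection. The sketch below is only this generic building block; the coordinate frames, names and numbers are illustrative assumptions, not the thesis's model or calibration.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with a monitor plane.

    origin:       3D eyeball centre (camera coordinates)
    direction:    unit gaze direction
    plane_point:  any point on the monitor plane
    plane_normal: unit normal of the monitor plane
    Returns the 3D point of regard, or None if the ray is parallel to the plane.
    """
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

# Example: an eye 40 cm in front of a monitor lying in the z = 0 plane
eye = np.array([0.02, -0.01, 0.40])
gaze = np.array([-0.05, -0.10, -1.0])
gaze /= np.linalg.norm(gaze)
print(ray_plane_intersection(eye, gaze, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

    In the binocular formulation, the rays from both eyes would be optimised jointly against the fixed monitor geometry, which is what shifts the calibration burden from the user to the device.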

    3D Gaze Estimation from Remote RGB-D Sensors

    The development of systems able to retrieve and characterise the state of humans is important for many applications and fields of study. In particular, as a display of attention and interest, gaze is a fundamental cue in understanding people's activities, behaviors, intentions, state of mind and personality. Moreover, gaze plays a major role in the communication process, for example in showing attention to the speaker, indicating who is being addressed, or averting gaze to keep the floor. Therefore, many applications within the fields of human-human, human-robot and human-computer interaction could benefit from gaze sensing. However, despite significant advances during more than three decades of research, current gaze estimation technologies cannot address the conditions often required within these fields, such as remote sensing, unconstrained user movements and minimal user calibration. Furthermore, to reduce cost, it is preferable to rely on consumer sensors, but this usually leads to low-resolution and low-contrast images that current techniques can hardly cope with. In this thesis we investigate the problem of automatic gaze estimation under head pose variations, low-resolution sensing and different levels of user calibration, including the uncalibrated case. We propose to build a non-intrusive gaze estimation system based on remote consumer RGB-D sensors. In this context, we propose algorithmic solutions which overcome many of the limitations of previous systems. We thus address the main aspects of this problem: 3D head pose tracking, 3D gaze estimation, and gaze-based application modeling. First, we develop an accurate model-based 3D head pose tracking system which adapts to the participant without requiring explicit actions. Second, to achieve head-pose-invariant gaze estimation, we propose a method to correct the eye image appearance variations due to head pose. We then investigate two different methodologies to infer the 3D gaze direction. The first builds upon machine learning regression techniques; in this context, we propose strategies to improve their generalization, in particular to handle different people. The second methodology is a new paradigm we propose and call geometric generative gaze estimation. This novel approach combines the benefits of geometric eye modeling (normally restricted to high-resolution images due to the difficulty of feature extraction) with a stochastic segmentation process (adapted to low resolution) within a Bayesian model, allowing the decoupling of user-specific geometry and session-specific appearance parameters, along with the introduction of priors, which are appropriate for adaptation relying on small amounts of data. The aforementioned gaze estimation methods are validated through extensive experiments on a comprehensive database which we collected and made publicly available. Finally, we study the problem of automatic gaze coding in natural dyadic and group human interactions. The system builds upon the thesis contributions to handle unconstrained head movements and the lack of user calibration. It further exploits the 3D tracking of participants and their gaze to conduct a 3D geometric analysis within a multi-camera setup. Experiments on real and natural interactions demonstrate that the system is highly accurate. Overall, the methods developed in this dissertation are suitable for many applications involving large diversity in terms of setup configuration, user calibration and mobility.
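    One building block implied by the abstract, combining a tracked 3D head pose with a gaze direction expressed in head coordinates to obtain the gaze in the sensor frame, can be sketched as follows. This is a hedged illustration of the frame transformation only; the appearance correction and Bayesian machinery described above are far more involved, and the names and numbers here are assumptions.

```python
import numpy as np

def gaze_in_camera_frame(R_head: np.ndarray, t_head: np.ndarray,
                         gaze_dir_head: np.ndarray, eye_pos_head: np.ndarray):
    """Express an eye position and gaze direction, given in the head frame,
    in the RGB-D camera frame using the tracked head pose (R_head, t_head)."""
    eye_cam = R_head @ eye_pos_head + t_head   # eyeball centre in camera coordinates
    gaze_cam = R_head @ gaze_dir_head          # directions transform by rotation only
    return eye_cam, gaze_cam / np.linalg.norm(gaze_cam)

# Example: head turned 20 degrees about the vertical axis, 60 cm from the sensor
theta = np.deg2rad(20)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
eye_cam, gaze_cam = gaze_in_camera_frame(R, np.array([0.0, 0.0, 0.6]),
                                         np.array([0.0, 0.0, -1.0]),
                                         np.array([0.03, 0.05, 0.0]))
print(eye_cam, gaze_cam)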

    Modelling the dynamic flight behaviour of birds in different frames of reference

    In this thesis I consider two aspects, energetics and guidance, of two dynamic flight behaviours performed by birds: dynamic soaring and prey pursuit. Uniting the thesis is the collection and modelling of bird trajectory data in different frames of reference to make inferences about dynamic flight behaviour. In particular, I collect data in a camera-fixed reference frame to model the dynamic soaring flight trajectories of Manx shearwaters in an aerodynamic reference frame, whereas I model the attack trajectories of Harris' hawks in both an inertial and a background frame of reference using data collected in an Earth-fixed frame of reference. The outputs of my investigation into the energetics of dynamic soaring are the first empirical demonstration of dynamic soaring outside the albatrosses, the formulation of a new metric for identifying and quantifying dynamic soaring, and the demonstration that the large-scale distribution of the Manx shearwater is affected by its dynamic soaring behaviour. The output of my investigation into the guidance of prey pursuit is the finding that Harris' hawk attack trajectories are well modelled by the proportional navigation (PN) guidance law commonly used by homing missiles. However, I also show that a guidance law that can be mechanised using only visual information (rather than the inertial and visual information required by PN) also fits the attack trajectory data successfully. Finally, I propose a method for analysing eye-in-head movements during dynamic flight in birds, and I find that Harris' hawks limit eye-in-head movement during terminal pursuit, a necessary condition for implementing PN guidance. By being explicit about the reference frames in which I model bird behaviour and, in some cases, by modelling the same data in different reference frames, I demonstrate the utility of the reference frame concept in analysing biological systems.
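    The proportional navigation law referred to above commands a turn rate proportional to the rotation rate of the line of sight to the target. A minimal planar sketch of the idea follows; the navigation gain, speeds and kinematics are chosen for illustration only and are not fitted to the hawk data.

```python
import numpy as np

def pn_step(pursuer, target, v_pursuer, v_target, heading, N=3.0, dt=0.02):
    """One planar proportional-navigation step: turn rate = N * line-of-sight rate."""
    los = target - pursuer
    rel_v = v_target - v_pursuer
    # Line-of-sight angular rate from the component of relative velocity across the LOS
    los_rate = (los[0] * rel_v[1] - los[1] * rel_v[0]) / np.dot(los, los)
    heading += N * los_rate * dt                      # PN steering command
    speed = np.linalg.norm(v_pursuer)                 # constant-speed pursuer
    v_new = speed * np.array([np.cos(heading), np.sin(heading)])
    return pursuer + v_new * dt, v_new, heading

# Example: pursuer at the origin chasing a slower target crossing from the right
p, v, h = np.array([0.0, 0.0]), np.array([10.0, 0.0]), 0.0
t, vt = np.array([30.0, 10.0]), np.array([-3.0, 0.0])
closest = np.inf
for _ in range(400):
    p, v, h = pn_step(p, t, v, vt, h)
    t = t + vt * 0.02
    closest = min(closest, np.linalg.norm(t - p))
print(f"closest approach: {closest:.2f} m")   # the pursuer homes in on the crossing target
```

    A vision-only alternative of the kind mentioned in the abstract would replace the inertially referenced line-of-sight rate with quantities measurable in the pursuer's own visual frame.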